wasi-nn vs whisper-turbo

| | wasi-nn | whisper-turbo |
| --- | --- | --- |
| Mentions | 3 | 11 |
| Stars | 402 | 1,569 |
| Growth | 4.7% | - |
| Activity | 5.6 | 8.9 |
| Last commit | 7 days ago | 2 months ago |
| Language | Rust | TypeScript |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wasi-nn
- Self-Hosting Open Source LLMs: Cross Devices and Local Deployment of Mistral 7B
I really like the post they mention (https://www.secondstate.io/articles/fast-llm-inference/). The reasons for avoiding Python all resonate with me. I'm excited to play with WASI-NN (https://github.com/WebAssembly/wasi-nn), and the Rust code to load up a GGUF model is very readable.
- Run LLMs on my own Mac fast and efficient Only 2 MBs
Mmm…
The wasi-nn that this relies on (https://github.com/WebAssembly/wasi-nn) is a proposal that relies on arbitrary plugin backends sending arbitrary chunks to some vendor implementation. The API is literally: set input, compute, get output.
…and that is totally non-portable.
The reason this works is that it relies on the abstraction already implemented in llama.cpp, which allows it to take a GGUF model and map it to multiple hardware targets, and which you can see has been lifted here: https://github.com/WasmEdge/WasmEdge/tree/master/plugins/was...
So..
> Developers can refer to this project to write their machine learning application in a high-level language using the bindings, compile it to WebAssembly, and run it with a WebAssembly runtime that supports the wasi-nn proposal, such as WasmEdge.
Is total rubbish; no, you can’t.
This isn’t portable.
It’s not sandboxed.
If you have a wasm binary you might be able to run it if the version of the runtime you’re using happens to implement the specific ggml backend you need, which it probably doesn’t… because there’s literally no requirement for it to do so.
There’s a lot of “so portable” talk in this article which really seems misplaced.
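To make the "set input, compute, get output" flow in the comment above concrete, here is a minimal sketch of one inference call through the high-level wasi-nn Rust crate, in the style of the WasmEdge GGML examples. The `GraphEncoding::Ggml` variant is a runtime plugin extension rather than part of the core proposal, and the file name, prompt handling, and exact method signatures are assumptions that vary by crate and plugin version.

```rust
// Minimal sketch, assuming the high-level `wasi-nn` Rust crate and a host
// runtime (e.g. WasmEdge) whose plugin implements the GGML backend.
// Names, enum variants, and tensor conventions vary by version.
use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // wasi-nn treats the model bytes as opaque; only the chosen backend
    // knows how to interpret a GGUF file. (File name is illustrative.)
    let model_bytes = std::fs::read("mistral-7b-instruct.Q5_K_M.gguf")?;
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::CPU)
        .build_from_bytes([&model_bytes])?;

    let mut ctx = graph.init_execution_context()?;

    // For the GGML backend the "input tensor" is just the UTF-8 prompt.
    let prompt = "Explain wasi-nn in one sentence.";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())?;

    // The three-step API the comment describes: set input, compute, get output.
    ctx.compute()?;
    let mut out = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut out)?;
    println!("{}", String::from_utf8_lossy(&out[..n]));
    Ok(())
}
```

Compiled to wasm32-wasi, a module like this only runs on a host whose wasi-nn implementation happens to ship the matching backend, which is exactly the portability caveat raised above.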
- The Promise of WASM
in machine learning (https://github.com/WebAssembly/wasi-nn)
whisper-turbo
- Whisper Turbo: speech recognition in the browser using WebGPU
- Show HN: Shadeup – A language that makes WebGPU easier
Even just the ability to accelerate LLMs in the browser, on any device, without an installation is awesome.
For example, fleetwood.dev has a really cool project that does audio transcription in the browser on the GPU: https://whisper-turbo.com/#
- Run Whisper on WebGPU with a few lines of JS
- Run LLMs on my own Mac fast and efficient Only 2 MBs
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
You'd be surprised how capable old GPUs are! I've had great success with people running Whisper-Turbo in the browser on really old hardware: https://whisper-turbo.com/
- Running Whisper on Rust and WebGPU
- Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Whisper large is only 1.5B params; why not run it client-side with something like https://github.com/FL33TW00D/whisper-turbo
(Disclaimer: I am the author)
- Whisper Turbo – Run Whisper Directly in the Browser with Rust and WebGPU
- Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
What are some alternatives?
wasmer - 🚀 The leading Wasm Runtime supporting WASIX, WASI and Emscripten
faster-whisper - Faster Whisper transcription with CTranslate2
distroless - 🥑 Language focused docker images, minus the operating system.
WhisperInput - Offline voice input panel & keyboard with punctuation for Android.
wagi - Write HTTP handlers in WebAssembly with a minimal amount of work
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
WasmEdge-WASINN-examples
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
discourse-ai
project-2501 - Project 2501 is an open-source AI assistant, written in C++.
get-beam - Run GPU inference and training jobs on serverless infrastructure that scales with you.
whisper.cpp - Port of OpenAI's Whisper model in C/C++