| | willow-inference-server | whisper.cpp |
|---|---|---|
| Mentions | 7 | 187 |
| Stars | 343 | 32,543 |
| Growth | 6.7% | - |
| Activity | 7.5 | 9.8 |
| Latest commit | 18 days ago | 1 day ago |
| Language | Python | C |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
willow-inference-server
-
Brave Leo now uses Mixtral 8x7B as default
I think this perspective comes from a lack of historical experience and hands-on experience overall.
Nvidia more broadly has very impressive support for their GPUs, but the support lifecycles for their Jetson hardware have been significantly worse over time. I encourage you to look at what those support lifecycles have looked like, with the most "egregious" example being the dropping of support for the Jetson Nano within, from what I recall, a couple of years.
Another consideration - Jetson is optimized for power efficiency/form-factor and on a per $ basis CUDA performance is terrible. The power efficiency and form-factor come at significant cost. See this discussion from one of my projects[0]. I evaluated the use of WIS on an Orin that I have and from what I can recall it was significantly slower than a GTX 1070 which is... Unimpressive.
In the end what do I care what people use, I'm offering the perspective and experience of someone who has actually used the Jetson line for many years and frequently struggled with all of these issues and more.
[0] - https://github.com/toverainc/willow-inference-server/discuss...
-
Whisper.api: An open source, self-hosted speech-to-text with fast transcription
ctranslate2 is incredible, I don’t know why it doesn’t get more attention.
We use it for our Willow Inference Server which has an API that can be used directly like OP project and supports all Whisper models, TTS, etc:
https://github.com/toverainc/willow-inference-server
The benchmarks are pretty incredible (largely thanks to ctranslate2).
-
Show HN: Project S.A.T.U.R.D.A.Y – open-source, self hosted, J.A.R.V.I.S
Nice! I'm the creator of Willow[0] (which has been mentioned here).
First of all, we love seeing efforts like this and we'd love to work together with other open source voice user interface projects! There's plenty of work to do in the space...
I have roughly two decades of experience with voice and one thing to keep in mind is how latency sensitive voice tasks are. Generally speaking when it comes to conversational audio people have very high expectations regarding interactivity. For example, in the VoIP world we know that conversation between people starts getting annoying at around 300ms of latency. Higher latencies for voice assistant tasks are more-or-less "tolerated" but latency still needs to be extremely low. Alexa/Echo (with all of its problems) is at least a decent benchmark for what people expect for interactivity and all things considered it does pretty well.
I know you're early (we are too!) but in your demo I counted roughly six seconds of latency between the initial hello and response (and nearly 10 for "tell me a joke"). In terms of conversational voice this feels like an eternity. Again, no shade at all (believe me I understand more than most) but just something I thought I'd add from my decades of experience with humans and voice. This is why we have such heavy emphasis on reducing latency as much as possible.
For an idea of just how much we emphasize this you can try our WebRTC demo[1] which can do end-to-end (from click stop record in browser to ASR response) in a few hundred milliseconds (with Whisper large-v2 and beam size 5 - medium/1 is a fraction of that) including internet latency (it's hosted in Chicago, FYI).
Running locally with WIS and Willow we see less than 500ms from end of speech (on device VAD) to command execution completion and TTS response with platforms like Home Assistant. Granted this is with GPU so you could call it cheating but a $100 six year old Nvidia Pascal series GPU runs circles around the fastest CPUs for these tasks (STT and TTS - see benchmarks here[2]). Again, kind of cheating but my RTX 3090 at home drops this down to around 200ms - roughly half of that time is Home Assistant. It's my (somewhat controversial) personal opinion that GPUs are more-or-less a requirement (today) for Alexa/Echo competitive responsiveness.
Speaking of latency, I've been noticing a trend with Willow users regarding LLMs - they are very neat, cool, and interesting (our inference server[3] supports LLamA based LLMs) but they really aren't the right tool for these kinds of tasks. They have very high memory requirements (relatively speaking), require a lot of compute, and are very slow (again, relatively speaking). They also don't natively support the kinds of API call/response you need for most voice tasks. There are efforts out there to support this with LLMs but frankly I find the overall approach very strange. It seems that LLMs have sucked a lot of oxygen out of the room and people have forgotten (or never heard of) "good old fashioned" NLU/NLP approaches.
Have you considered an NLU/NLP engine like Rasa[4]? This is the approach we will be taking to implement this kind of functionality in a flexible and assistant platform/integration agnostic way. By the time you stack up VAD, STT, understanding user intent (while allowing flexible grammar), calling an API, execution, and TTS response latency starts to add up very, very quickly.
As one example, for "tell me a joke" Alexa does this in a few hundred milliseconds and I guarantee they're not using an LLM for this task - you can have a couple of hundred jokes to randomly select from with pre-generated TTS responses cached (as one path). Again, this is the approach we are taking to "catch up" with Alexa for all kinds of things from jokes to creating calendar entries, etc. Of course you can still have a catch-all to hand off to LLM for "conversation" but I'm not sure users actually want this for voice.
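The cached-response path described above can be sketched in a few lines. Everything here (the cache entries, file paths, and intent names) is hypothetical, not Willow's actual implementation; the point is that serving a pre-generated joke is a dictionary lookup plus a random pick, not an LLM plus TTS inference:

```python
import random

# Hypothetical cache of pre-written jokes, each paired with TTS audio
# rendered ahead of time, so nothing is synthesized at request time.
JOKE_CACHE = [
    {"text": "Why did the scarecrow win an award? He was outstanding in his field.",
     "tts_path": "cache/joke_001.wav"},
    {"text": "I told my computer I needed a break. It said it would go to sleep.",
     "tts_path": "cache/joke_002.wav"},
]

def handle_intent(intent: str) -> dict:
    """Return a canned response for known intents."""
    if intent == "tell_joke":
        # Random selection from the cache takes microseconds, versus
        # seconds for LLM generation followed by TTS synthesis.
        return random.choice(JOKE_CACHE)
    raise KeyError(f"no cached handler for intent {intent!r}")

response = handle_intent("tell_joke")
print(response["text"])
```

A catch-all branch could still hand unknown intents off to an LLM, but the common cases never pay that latency.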
I may be misunderstanding your goals but just a few things I thought I would mention.
[0] - https://github.com/toverainc/willow
[1] - https://wisng.tovera.io/rtc/
[2] - https://github.com/toverainc/willow-inference-server/tree/wi...
[3] - https://github.com/toverainc/willow-inference-server
[4] - https://rasa.com/
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
We run into this constantly with Willow[0] and the Willow Inference Server[1]. There seems to be a large gap in understanding with many users. They seem to find it difficult to understand a fundamental reality: GPUs are so physically different from, and better suited to, many/most ML tasks that all the CPU tricks in the world cannot bring CPU even close to the performance of GPUs (while maintaining quality/functionality) for many tasks. I find this interesting because everyone seems to take it as obvious that integrated graphics vs discrete graphics for gaming aren't even close. Ditto for these tasks.
With Willow Inference Server I'm constantly telling people: a six year old $100 Tesla P4/GTX 1070 walks all over even the best CPUs in the world for our primary task of speech to text/ASR - at dramatically lower cost and power usage. Seriously - a GTX 1070 is at least 5x faster than a Threadripper 5955WX. Our goal is to provide an open-source commercial voice assistant equivalent user experience and that is and will be fundamentally impossible for the foreseeable future on CPU.
Slight tangent but there are users in the space who seem to be under the impression that they can use their Raspberry Pi for voice assistant/speech recognition. It's not even close to a fair fight but with the same implementation and settings a GTX 1070 is roughly 90x (nearly two orders of magnitude) faster[2] than a Raspberry Pi... Yes, all-in a machine with a GTX 1070 uses an order of magnitude more power (roughly 30W vs 3W) than a Raspberry Pi, but even in countries with the most expensive power in the world that works out to a $2-$3/mo cost difference - which I feel, at least, is a reasonable trade-off considering the dramatic difference in usability (the Raspberry Pi is essentially useless - waiting 10-30 seconds for a response makes pulling your phone out faster).
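As a back-of-envelope check of the $2-$3/mo figure above, here is the arithmetic the comment implies. The wattages and the $0.12-$0.15/kWh rates are assumptions for illustration, not measured values:

```python
# Rough monthly power-cost difference between a GTX 1070 machine and a
# Raspberry Pi running 24/7 as a voice assistant backend.
# Assumed draw: ~30W for the GPU box vs ~3W for the Pi (illustrative).
watts_gpu_box = 30
watts_pi = 3
hours_per_month = 24 * 30

extra_kwh = (watts_gpu_box - watts_pi) * hours_per_month / 1000  # ~19.4 kWh/mo

for rate in (0.12, 0.15):  # assumed $/kWh residential rates
    print(f"${extra_kwh * rate:.2f}/mo extra at ${rate}/kWh")
```

At those assumed rates the difference lands between roughly $2.30 and $2.90 per month, consistent with the comment's $2-$3 range.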
[0] - https://github.com/toverainc/willow
[1] - https://github.com/toverainc/willow-inference-server
[2] - https://github.com/toverainc/willow-inference-server/tree/wi...
-
GGML – AI at the Edge
Shameless plug, I'm the founder of Willow[0].
In short you can:
1) Run a local Willow Inference Server[1]. It supports CPU or CUDA and is just about the fastest implementation of Whisper out there for "real time" speech.
2) Run local command detection on device. We pull your Home Assistant entities on setup and define basic grammar for them, but any English commands (up to 400) that can be processed by Home Assistant are recognized directly on the $50 ESP BOX device and sent to Home Assistant (or openHAB, or a REST endpoint, etc.) for processing.
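The fixed-grammar approach described in 2) can be sketched roughly as follows. The entity names, verbs, and matching logic here are hypothetical illustrations, not the ESP BOX firmware: entities pulled from Home Assistant get expanded into a flat list of accepted utterances, and recognized speech is matched against that list:

```python
# Hypothetical sketch of fixed-grammar command matching.
# Entities (as if pulled from Home Assistant) crossed with a small verb
# set yield the full command list - capped at 400 on the real device.
ENTITIES = ["kitchen light", "living room light", "thermostat"]
VERBS = {"turn on": "turn_on", "turn off": "turn_off"}

GRAMMAR = {
    f"{verb} {entity}": (service, entity)
    for verb, service in VERBS.items()
    for entity in ENTITIES
}  # 2 verbs x 3 entities = 6 accepted utterances

def match(utterance: str):
    """Exact match against the generated grammar; None if unrecognized."""
    return GRAMMAR.get(utterance.lower().strip())

print(match("Turn on kitchen light"))  # ('turn_on', 'kitchen light')
print(match("make me a sandwich"))     # None
```

A matched command maps directly to a service call, which is why recognition can happen on a $50 device with no server round-trip for the understanding step.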
Whether WIS or local our performance target is 500ms from end of speech to command executed.
[0] - https://github.com/toverainc/willow
[1] - https://github.com/toverainc/willow-inference-server
-
Show HN: Willow Inference Server: Optimized ASR/TTS/LLM for Willow/WebRTC/REST
Thanks!
Yes, "realtime multiple" is audio/speech length divided by actual inference time.
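The definition above is simple enough to state as code. This is just the ratio described in the comment, with illustrative numbers:

```python
# "Realtime multiple": audio/speech length divided by actual inference
# time. A value above 1.0 means inference is faster than realtime.
def realtime_multiple(audio_seconds: float, inference_seconds: float) -> float:
    return audio_seconds / inference_seconds

# e.g. 10 s of speech transcribed in 0.25 s is 40x realtime
print(realtime_multiple(10.0, 0.25))
```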
You got it! The demo video is showing the slowest response times because it is using the highest quality/accuracy settings available with Whisper (large-v2, beam 5). For comparison, Willow devices use medium/beam 1 by default, and those responses are measured in the 500 milliseconds or less range (again depending on speech length) across a wide variety of new and old CUDA hardware. Some sample benchmarks here[0].
Applications using WIS (including Willow) can provide model settings on a per-request basis to balance quality vs latency depending on the task.
[0] - https://github.com/toverainc/willow-inference-server#benchma...
whisper.cpp
-
Show HN: I created automatic subtitling app to boost short videos
whisper.cpp [1] has a karaoke example that uses ffmpeg's drawtext filter to display rudimentary karaoke-like captions. It also supports diarisation. Perhaps it could be a starting point to create a better script that does what you need.
--
1: https://github.com/ggerganov/whisper.cpp/blob/master/README....
- LLaMA Now Goes Faster on CPUs
-
LLMs on your local Computer (Part 1)
The ggml library is one of the first libraries for local LLM inference. It's a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices - and it can therefore also be considered a tinkering tool for trying new optimizations that are then incorporated into other downstream projects. This library is at the heart of several other projects, powering LLM inference on desktops and even mobile phones. Subprojects for running specific LLMs or LLM families exist, such as whisper.cpp.
-
Voxos.ai – An Open-Source Desktop Voice Assistant
I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...
I don't have any direct experience with it... I've only played around with whisper locally, using scripts.
-
Jarvis: A Voice Virtual Assistant in Python (OpenAI, ElevenLabs, Deepgram)
Unless I'm misunderstanding, `whisper.cpp` seems to support streaming, and the repository includes a native example[0] and a WASM example[1] with a demo site[2].
[0]: https://github.com/ggerganov/whisper.cpp/tree/master/example...
- Wchess
-
I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
Usage 1: Good for transcribing audio. An example use case could be summarizing YouTube videos or long courses. Usage 2: You talk by voice to your AI, which responds with text (later with audio too). - https://github.com/ggerganov/whisper.cpp
-
Scrybble is the ReMarkable highlights to Obsidian exporter I have been looking for
🗣️🎙️ whisper.cpp (offline speech-to-text transcription, models trained by OpenAI, CLI based, browser based)
- Whisper.wasm
-
Whisper C++ not working for me. Anyone else?
Has anyone played around with Whisper C++ for Swift? I'm hitting a snag even on the demo. I've downloaded the GitHub repo and everything matches up with this video [ https://youtu.be/b10OHCDHDQ4 ], but when he hits the transcribe button, it actually prints out the captioning. When I do it, it skips that part and just says "Done...". It does everything else - plays the audio, says it's transcribing - it just doesn't show me the transcription, and it's not in the debug window either. The demo isn't throwing any errors, and I haven't really messed with the code, so this is their example. https://github.com/ggerganov/whisper.cpp
What are some alternatives?
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
faster-whisper - Faster Whisper transcription with CTranslate2
whisper-realtime - Whisper runs in realtime on a laptop GPU (8GB)
bark - 🔊 Text-Prompted Generative Audio Model
whisper.api - This project provides an API with user level access support to transcribe speech to text using a finetuned and processed Whisper ASR model.
Whisper - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
wscribe-editor - web based editor for subtitles and transcripts
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
MeZO - [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
ggml - Tensor library for machine learning
llama.cpp - LLM inference in C/C++