willow-inference-server VS faster-whisper

Compare willow-inference-server vs faster-whisper and see what their differences are.

willow-inference-server

Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS (by toverainc)

                     willow-inference-server    faster-whisper
Mentions             7                          23
Stars                329                        9,150
Growth               9.4%                       11.6%
Activity             7.5                        8.1
Last commit          about 1 month ago          8 days ago
Language             Python                     Python
License              Apache License 2.0         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

willow-inference-server

Posts with mentions or reviews of willow-inference-server. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-27.
  • Brave Leo now uses Mixtral 8x7B as default
    7 projects | news.ycombinator.com | 27 Jan 2024
    I think this perspective comes from a lack of historical and hands-on experience overall.

    Nvidia more broadly has very impressive support for their GPUs, but if you look at the support lifecycles for their Jetson hardware over time it's significantly worse. I encourage you to look at what those support lifecycles have looked like, with the most "egregious" example being the dropping of support for the Jetson Nano within, from what I recall, a couple of years.

    Another consideration - Jetson is optimized for power efficiency/form factor, and on a per-dollar basis its CUDA performance is terrible. The power efficiency and form factor come at significant cost. See this discussion from one of my projects[0]. I evaluated the use of WIS on an Orin that I have and, from what I can recall, it was significantly slower than a GTX 1070, which is... unimpressive.

    In the end, what do I care what people use? I'm offering the perspective and experience of someone who has actually used the Jetson line for many years and frequently struggled with all of these issues and more.

    [0] - https://github.com/toverainc/willow-inference-server/discuss...

  • Whisper.api: An open source, self-hosted speech-to-text with fast transcription
    5 projects | news.ycombinator.com | 22 Aug 2023
    ctranslate2 is incredible; I don't know why it doesn't get more attention.

    We use it for our Willow Inference Server, which has an API that can be used directly like the OP's project and supports all Whisper models, TTS, etc:

    https://github.com/toverainc/willow-inference-server

    The benchmarks are pretty incredible (largely thanks to ctranslate2).
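
    For illustration, here is a minimal sketch of what calling a self-hosted WIS-style REST ASR endpoint could look like from Python; the host, port, path, and query parameters are assumptions for illustration only, not the documented WIS API (see the repository README for the actual routes):

        # Hypothetical sketch: POST a WAV file to a self-hosted speech-to-text endpoint.
        # The host, path, and parameter names are placeholders, not the documented
        # Willow Inference Server API; check the repository for the real routes.
        import requests

        WIS_URL = "https://my-wis-host:19000/api/asr"  # placeholder endpoint

        with open("speech.wav", "rb") as f:
            audio_bytes = f.read()

        response = requests.post(
            WIS_URL,
            params={"model": "medium", "beam_size": 1},  # hypothetical per-request settings
            headers={"Content-Type": "audio/wav"},
            data=audio_bytes,
            timeout=30,
        )
        response.raise_for_status()
        print(response.json())  # e.g. {"text": "..."}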

  • Show HN: Project S.A.T.U.R.D.A.Y – open-source, self hosted, J.A.R.V.I.S
    7 projects | news.ycombinator.com | 2 Jul 2023
    Nice! I'm the creator of Willow[0] (which has been mentioned here).

    First of all, we love seeing efforts like this and we'd love to work together with other open source voice user interface projects! There's plenty of work to do in the space...

    I have roughly two decades of experience with voice and one thing to keep in mind is how latency sensitive voice tasks are. Generally speaking when it comes to conversational audio people have very high expectations regarding interactivity. For example, in the VoIP world we know that conversation between people starts getting annoying at around 300ms of latency. Higher latencies for voice assistant tasks are more-or-less "tolerated" but latency still needs to be extremely low. Alexa/Echo (with all of its problems) is at least a decent benchmark for what people expect for interactivity and all things considered it does pretty well.

    I know you're early (we are too!) but in your demo I counted roughly six seconds of latency between the initial hello and response (and nearly 10 for "tell me a joke"). In terms of conversational voice this feels like an eternity. Again, no shade at all (believe me I understand more than most) but just something I thought I'd add from my decades of experience with humans and voice. This is why we have such heavy emphasis on reducing latency as much as possible.

    For an idea of just how much we emphasize this you can try our WebRTC demo[1] which can do end-to-end (from click stop record in browser to ASR response) in a few hundred milliseconds (with Whisper large-v2 and beam size 5 - medium/1 is a fraction of that) including internet latency (it's hosted in Chicago, FYI).

    Running locally with WIS and Willow we see less than 500ms from end of speech (on-device VAD) to command execution completion and TTS response with platforms like Home Assistant. Granted this is with a GPU, so you could call it cheating, but a $100 six-year-old Nvidia Pascal-series GPU runs circles around the fastest CPUs for these tasks (STT and TTS - see benchmarks here[2]). Again, kind of cheating, but my RTX 3090 at home drops this down to around 200ms - roughly half of that time is Home Assistant. It's my (somewhat controversial) personal opinion that GPUs are more-or-less a requirement (today) for Alexa/Echo-competitive responsiveness.

    Speaking of latency, I've been noticing a trend with Willow users regarding LLMs - they are very neat, cool, and interesting (our inference server[3] supports LLaMA-based LLMs) but they really aren't the right tool for these kinds of tasks. They have very high memory requirements (relatively speaking), require a lot of compute, and are very slow (again, relatively speaking). They also don't natively support the kinds of API call/response you need for most voice tasks. There are efforts out there to support this with LLMs but frankly I find the overall approach very strange. It seems that LLMs have sucked a lot of oxygen out of the room and people have forgotten (or never heard of) "good old-fashioned" NLU/NLP approaches.

    Have you considered an NLU/NLP engine like Rasa[4]? This is the approach we will be taking to implement this kind of functionality in a flexible and assistant platform/integration agnostic way. By the time you stack up VAD, STT, understanding user intent (while allowing flexible grammar), calling an API, execution, and TTS response latency starts to add up very, very quickly.

    As one example, for "tell me a joke" Alexa does this in a few hundred milliseconds and I guarantee they're not using an LLM for this task - you can have a couple of hundred jokes to randomly select from with pre-generated TTS responses cached (as one path). Again, this is the approach we are taking to "catch up" with Alexa for all kinds of things from jokes to creating calendar entries, etc. Of course you can still have a catch-all to hand off to LLM for "conversation" but I'm not sure users actually want this for voice.
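
    To make that concrete, here is a rough sketch of the "pre-generated responses" path described above; the intent name, file layout, and function are illustrative assumptions, not Willow's actual implementation:

        # Rough sketch of the "canned intent" idea: a pool of pre-written jokes with
        # TTS audio rendered ahead of time, so "tell me a joke" becomes a dictionary
        # lookup plus playback of a cached file rather than an LLM round trip.
        # Names and file layout are illustrative, not Willow's actual code.
        import random

        JOKES = [
            {"text": "Why did the speaker blush? It saw the audio jack.", "audio": "cache/joke_000.wav"},
            {"text": "I told my smart home a joke. It didn't get it either.", "audio": "cache/joke_001.wav"},
            # ...a couple hundred entries, each with TTS pre-generated offline
        ]

        def handle_intent(intent: str) -> dict:
            """Return a cached response for known intents; other intents go elsewhere."""
            if intent == "tell_me_a_joke":
                return random.choice(JOKES)
            raise KeyError(f"no canned handler for intent {intent!r}")

        reply = handle_intent("tell_me_a_joke")
        print(reply["text"], "->", reply["audio"])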

    I may be misunderstanding your goals but just a few things I thought I would mention.

    [0] - https://github.com/toverainc/willow

    [1] - https://wisng.tovera.io/rtc/

    [2] - https://github.com/toverainc/willow-inference-server/tree/wi...

    [3] - https://github.com/toverainc/willow-inference-server

    [4] - https://rasa.com/

  • VLLM: 24x faster LLM serving than HuggingFace Transformers
    3 projects | news.ycombinator.com | 20 Jun 2023
    We run into this constantly with Willow[0] and the Willow Inference Server[1]. There seems to be a large gap in understanding with many users. They seem to find it difficult to understand a fundamental reality: GPUs are so physically different and better suited to many/most ML tasks that all the CPU tricks in the world cannot bring CPUs even close to the performance of GPUs (while maintaining quality/functionality). I find this interesting because everyone seems to take it as obvious that integrated graphics vs discrete graphics for gaming aren't even close. Ditto for these tasks.

    With Willow Inference Server I'm constantly telling people: a six-year-old $100 Tesla P4/GTX 1070 walks all over even the best CPUs in the world for our primary task of speech-to-text/ASR - at dramatically lower cost and power usage. Seriously - a GTX 1070 is at least 5x faster than a Threadripper 5955WX. Our goal is to provide an open-source user experience competitive with commercial voice assistants, and that is and will be fundamentally impossible for the foreseeable future on CPU.

    Slight tangent, but there are users in the space who seem to be under the impression that they can use their Raspberry Pi for voice assistant/speech recognition. It's not even close to a fair fight: with the same implementation and settings a GTX 1070 is roughly 90x (nearly two orders of magnitude) faster[2] than a Raspberry Pi... Yes, all-in a machine with a GTX 1070 uses an order of magnitude more power (roughly 30W vs 3W) than a Raspberry Pi, but even in the countries with the most expensive electricity in the world that works out to a $2-$3/mo difference - which I feel, at least, is a reasonable trade-off considering the dramatic difference in usability (the Raspberry Pi is essentially useless - waiting 10-30 seconds for a response makes pulling your phone out faster).

    [0] - https://github.com/toverainc/willow

    [1] - https://github.com/toverainc/willow-inference-server

    [2] - https://github.com/toverainc/willow-inference-server/tree/wi...

  • GGML – AI at the Edge
    11 projects | news.ycombinator.com | 6 Jun 2023
    Shameless plug, I'm the founder of Willow[0].

    In short you can:

    1) Run a local Willow Inference Server[1]. It supports CPU or CUDA and is just about the fastest implementation of Whisper out there for "real time" speech.

    2) Run local command detection on device. We pull your Home Assistant entities on setup and define basic grammar for them, but any English commands (up to 400) that can be processed by Home Assistant are recognized directly on the $50 ESP BOX device and sent to Home Assistant (or openHAB, or a REST endpoint, etc) for processing.

    Whether using WIS or on-device recognition, our performance target is 500ms from end of speech to command executed.

    [0] - https://github.com/toverainc/willow

    [1] - https://github.com/toverainc/willow-inference-server

  • Show HN: Willow Inference Server: Optimized ASR/TTS/LLM for Willow/WebRTC/REST
    3 projects | news.ycombinator.com | 23 May 2023
    Thanks!

    Yes, "realtime multiple" is audio/speech length divided by actual inference time.
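
    In code terms, that's simply (a trivial sketch of the arithmetic):

        # "Realtime multiple" as described above: audio duration divided by inference time.
        def realtime_multiple(audio_seconds: float, inference_seconds: float) -> float:
            return audio_seconds / inference_seconds

        # Example: a 10-second utterance transcribed in 0.25 s is a 40x realtime multiple.
        print(realtime_multiple(10.0, 0.25))  # 40.0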

    You got it! The demo video is showing the slowest response times because it is using the highest quality/accuracy settings available with Whisper (large-v2, beam 5). Willow devices use medium 1 by default for comparison and those responses are measured in the 500 milliseconds or less range (again depending on speech length) across a wide variety of new and old CUDA hardware. Some sample benchmarks here[0].

    Applications using WIS (including Willow) can provide model settings on a per-request basis to balance quality vs latency depending on the task.

    [0] - https://github.com/toverainc/willow-inference-server#benchma...

faster-whisper

Posts with mentions or reviews of faster-whisper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-29.
  • Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
    7 projects | dev.to | 29 Apr 2024
    Faster-whisper (https://github.com/SYSTRAN/faster-whisper)
  • Using Groq to Build a Real-Time Language Translation App
    3 projects | dev.to | 5 Apr 2024
    For our real-time STT needs, we'll employ a fantastic library called faster-whisper.
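
    A minimal faster-whisper usage sketch (the model size, device, compute type, and file name below are placeholders; adjust them for your hardware):

        # Minimal faster-whisper transcription sketch; model size, device, compute
        # type, and the audio file name are placeholders.
        from faster_whisper import WhisperModel

        model = WhisperModel("small", device="cpu", compute_type="int8")
        segments, info = model.transcribe("speech.wav", beam_size=5)

        print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
        for segment in segments:
            print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
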
  • Apple Explores Home Robotics as Potential 'Next Big Thing'
    3 projects | news.ycombinator.com | 4 Apr 2024
    Thermostats: https://www.sinopetech.com/en/products/thermostat/

    I haven't tried running a local speech-to-text engine backed by an LLM to control Home Assistant. Maybe someone is working on this already?

    STT: https://github.com/SYSTRAN/faster-whisper

    LLM: https://github.com/Mozilla-Ocho/llamafile/releases

    LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...

    It would take some tweaking to get the voice commands working correctly.

  • Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
    10 projects | news.ycombinator.com | 13 Dec 2023
    Could someone elaborate on how this is accomplished, and whether there is any quality disparity compared to the original Whisper?

    A repo like https://github.com/SYSTRAN/faster-whisper makes immediate sense as to why it's faster than the original, but this one, not so much, especially considering it's even much faster.

  • Now I Can Just Print That Video
    5 projects | news.ycombinator.com | 4 Dec 2023
    Cool! I had the same project idea recently. You may be interested in this for the step of speech2text: https://github.com/SYSTRAN/faster-whisper
  • Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
    14 projects | news.ycombinator.com | 31 Oct 2023
    That's the implication. If the distil models are in the same format as the original OpenAI models, then they can be converted for faster-whisper use as per the conversion instructions on https://github.com/guillaumekln/faster-whisper/

    So then we'll see whether we get the 6x model speedup on top of the stated 4x faster-whisper code speedup.
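
    For reference, a sketch of that conversion using CTranslate2's Transformers converter in Python; the model ID, output directory, and quantization are placeholders, and whether a given distil checkpoint converts cleanly depends on it keeping the original Whisper architecture:

        # Sketch: convert a Hugging Face Whisper checkpoint into the CTranslate2
        # format that faster-whisper loads. Model ID and output path are placeholders.
        from ctranslate2.converters import TransformersConverter

        converter = TransformersConverter("openai/whisper-large-v2")
        converter.convert("whisper-large-v2-ct2", quantization="float16")

    The resulting directory can then be passed to faster-whisper's WhisperModel in place of a model size name.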

  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    > While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper being an example of "5-10x as performant".

    It easily is. See the benchmarks[0] from faster-whisper, which uses CTranslate2. That's 5x faster than the OpenAI reference code on a Tesla V100. Needless to say, something like a 4080 easily multiplies that.

    > https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuses being double the price when it comes to Whisper inference, with 7900XTX being directly comparable with 4080, albeit with higher power draw. To be fair it's not using ROCm but Direct3D 11, but for performance/price arguments sake that detail is not relevant.

    With all due respect to the author of the article, this is "my first entry into ML" territory. They talk about a 5-10 second delay; my project can do sub-1-second times[1] even with ancient GPUs thanks to CTranslate2. I don't have an RTX 4080, but if you look at the performance stats for the closest thing (RTX 4090) the numbers are positively bonkers - completely untouchable for anything ROCm-based. Same goes for the other projects I linked; lmdeploy does over 100 tokens/s in a single session with Llama 2 13B on my RTX 4090 and almost 600 tokens/s across eight simultaneous sessions.

    > EDIT: Also using CTranslate2 as an example is not great as it's actually a good showcase why ROCm is so far behind CUDA: It's all about adapting the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then would need additional effort to add ROCm support that projects with low amount of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no-one is working to get ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )

    I don't understand what you're saying here. It (along with the other projects I linked) is a fantastic example of just how far behind the ROCm ecosystem is. ROCm isn't even on the radar for most of them, as your linked issue highlights.

    Things always get implemented in CUDA first (ten years in this space and I've never seen ROCm first) and ROCm users either wait months (minimum) for sub-par performance or never get it at all.

    [0] - https://github.com/guillaumekln/faster-whisper#benchmark

    [1] - https://heywillow.io/components/willow-inference-server/#ben...

  • Open Source Libraries
    25 projects | /r/AudioAI | 2 Oct 2023
    guillaumekln/faster-whisper
  • Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
    3 projects | news.ycombinator.com | 12 Sep 2023
    Neat to see a new implementation, although I'll note that for those looking for a drop-in replacement for the whisper library, I believe that both faster-whisper https://github.com/guillaumekln/faster-whisper and https://github.com/m-bain/whisperX are easier (PyTorch-based, don't require a web browser) and a lot faster (WhisperX is up to 70X realtime).
  • Whisper.api: An open source, self-hosted speech-to-text with fast transcription
    5 projects | news.ycombinator.com | 22 Aug 2023
    One caveat here is that whisper.cpp does not offer any CUDA support at all; acceleration is only available for Apple Silicon.

    If you have Nvidia hardware, the CTranslate2-based faster-whisper is very, very fast: https://github.com/guillaumekln/faster-whisper

What are some alternatives?

When comparing willow-inference-server and faster-whisper you can also consider the following projects:

willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative

whisper.cpp - Port of OpenAI's Whisper model in C/C++

whisper-realtime - Whisper runs in realtime on a laptop GPU (8GB)

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

whisper.api - This project provides an API with user level access support to transcribe speech to text using a finetuned and processed Whisper ASR model.

stable-ts - Transcription, forced alignment, and audio indexing with OpenAI's Whisper

wscribe-editor - web based editor for subtitles and transcripts

whisper-diarization - Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper

MeZO - [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

ggml - Tensor library for machine learning