silero-vad vs faster-whisper

| | silero-vad | faster-whisper |
|---|---|---|
| Mentions | 10 | 23 |
| Stars | 2,866 | 8,899 |
| Growth | - | 9.1% |
| Activity | 6.9 | 8.1 |
| Last Commit | 11 days ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of silero-vad
- New models and developer products announced at OpenAI DevDay
> How do you detect speech starting and stopping?
https://github.com/snakers4/silero-vad
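For reference, a minimal sketch of the two documented ways silero-vad answers that question, following its README; the file name and the chunking loop are illustrative:

```python
import torch

# Load the model and helper utilities from torch.hub (see the silero-vad README)
model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
get_speech_timestamps, save_audio, read_audio, VADIterator, collect_chunks = utils

wav = read_audio('example.wav', sampling_rate=16000)  # placeholder file name

# Offline: start/end sample indices for every detected speech segment
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(speech_timestamps)  # e.g. [{'start': 4384, 'end': 20512}, ...]

# Streaming: feed fixed-size chunks and watch for start/stop events
vad_iterator = VADIterator(model, sampling_rate=16000)
window = 512  # samples per step at 16 kHz
for i in range(0, len(wav) - window + 1, window):
    event = vad_iterator(wav[i:i + window], return_seconds=True)
    if event:
        print(event)  # {'start': 1.3} when speech begins, {'end': 3.1} when it stops
vad_iterator.reset_states()
```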
- [Discussion] Video Translation Task
You could look into https://github.com/guillaumekln/faster-whisper, especially the VAD (Voice Activity Detector) section, which uses https://github.com/snakers4/silero-vad.
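Concretely, the VAD integration the comment points at is a single flag on faster-whisper's transcribe call; the model size and input file below are illustrative, while vad_filter and vad_parameters are documented in the faster-whisper README:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v2")  # model size is an illustrative choice

# vad_filter runs the bundled Silero VAD to drop non-speech audio before
# decoding; vad_parameters tunes it (the value below is the README's example).
segments, info = model.transcribe(
    "video_audio.mp3",  # placeholder input file
    vad_filter=True,
    vad_parameters=dict(min_silence_duration_ms=500),
)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```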
- Using Whisper to transcribe the entire Forensic Files series
I also had the same synchronization issue, so I wrote a WebUI/CLI that uses Silero-VAD to first split the audio whenever there is a silent portion (or every 30 seconds), and I haven't experienced the issue since:
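The splitting step described there could look roughly like this with Silero-VAD's helpers. This is a simplified sketch, not the WebUI's actual code; the file name, the 30-second cap, and the grouping strategy are assumptions:

```python
import torch

model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
get_speech_timestamps, _, read_audio, _, _ = utils

SR = 16000
MAX_CHUNK = 30 * SR  # also cut at least every 30 seconds

wav = read_audio('episode.wav', sampling_rate=SR)  # placeholder file name
speech = get_speech_timestamps(wav, model, sampling_rate=SR)

# Group consecutive speech segments into chunks, cutting at silent gaps and
# capping chunk length at 30 s (a single overlong segment is not split
# further in this simplified sketch).
chunks, cur = [], None
for seg in speech:
    if cur is None:
        cur = [seg['start'], seg['end']]
    elif seg['end'] - cur[0] <= MAX_CHUNK:
        cur[1] = seg['end']  # extend the current chunk through this segment
    else:
        chunks.append(tuple(cur))
        cur = [seg['start'], seg['end']]
if cur is not None:
    chunks.append(tuple(cur))

pieces = [wav[s:e] for s, e in chunks]  # each piece goes to Whisper separately
```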
- Whisper - A new free AI model from OpenAI that can transcribe Japanese (and many other languages) at up to "human level" accuracy
By the way, I've updated the WebUI to now also support using Silero VAD to break up the audio into distinct sections, and run Whisper on each section and then combine them into one single transcript/SRT file.
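The "combine them into one single transcript/SRT file" step boils down to offsetting each section's timestamps by where that section started in the original audio. A hedged sketch; segment objects with .start/.end/.text are assumed, as returned by the Whisper libraries above:

```python
def srt_time(t: float) -> str:
    # Format seconds as an SRT timestamp: HH:MM:SS,mmm
    ms = int(t * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def merge_to_srt(chunk_results):
    # chunk_results: list of (chunk_offset_seconds, segments) pairs, where
    # each segment has .start/.end relative to its own chunk, plus .text
    entries, idx = [], 1
    for offset, segments in chunk_results:
        for seg in segments:
            entries.append(
                f"{idx}\n"
                f"{srt_time(offset + seg.start)} --> {srt_time(offset + seg.end)}\n"
                f"{seg.text.strip()}\n"
            )
            idx += 1
    return "\n".join(entries)
```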
- [P] A more detailed post about Silero VAD on The Gradient
The VAD is always available on GitHub - Silero VAD: pre-trained enterprise-grade voice activity detector.
- [P] Silero VAD: One voice detector to rule them all
I also pinned some interesting comments regarding mobile and IoT usage here: https://github.com/snakers4/silero-vad/issues/37
Mentions of faster-whisper
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
Faster-whisper (https://github.com/SYSTRAN/faster-whisper)
- Using Groq to Build a Real-Time Language Translation App
For our real-time STT needs, we'll employ a fantastic library called faster-whisper.
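For a near-real-time setup like that, faster-whisper's main latency knobs are model size, compute type, and beam size; the specific choices below are illustrative, not the app's actual configuration:

```python
from faster_whisper import WhisperModel

# A small model with int8 quantization keeps per-utterance latency low,
# even on CPU; trade accuracy for speed as needed.
model = WhisperModel("base", device="cpu", compute_type="int8")

segments, info = model.transcribe("mic_capture.wav", beam_size=1)  # greedy decode
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
print("".join(seg.text for seg in segments))
```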
- Apple Explores Home Robotics as Potential 'Next Big Thing'
Thermostats: https://www.sinopetech.com/en/products/thermostat/
I haven't tried running a local speech-to-text engine feeding an LLM to control Home Assistant. Maybe someone is working on this already?
STT: https://github.com/SYSTRAN/faster-whisper
LLM: https://github.com/Mozilla-Ocho/llamafile/releases
LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...
It would take some tweaking to get the voice commands working correctly.
- Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
Could someone elaborate on how this is accomplished, and whether there is any quality disparity compared to the original Whisper?
With repos like https://github.com/SYSTRAN/faster-whisper it makes immediate sense why they're faster than the original, but this one, not so much, especially considering it's even much faster.
- Now I Can Just Print That Video
Cool! I had the same project idea recently. You may be interested in this for the speech-to-text step: https://github.com/SYSTRAN/faster-whisper
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
That's the implication. If the distil models are in the same format as the original OpenAI models, then they can be converted for faster-whisper use per the conversion instructions at https://github.com/guillaumekln/faster-whisper/.
Then we'll see whether we get the 6x model speedup on top of the stated 4x faster-whisper code speedup.
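For what it's worth, the conversion path the README describes goes through CTranslate2's Transformers converter. Assuming the distil checkpoints really do load as standard Transformers Whisper models, a sketch would be:

```python
from ctranslate2.converters import TransformersConverter

# Assumption: distil-whisper/distil-large-v2 loads as a standard Transformers
# Whisper checkpoint, so the converter the faster-whisper README points to applies.
# Requires the `transformers` and `torch` packages alongside ctranslate2.
converter = TransformersConverter(
    "distil-whisper/distil-large-v2", copy_files=["tokenizer.json"]
)
converter.convert("distil-large-v2-ct2", quantization="float16")
```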
- AMD May Get Across the CUDA Moat
> While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper being an example of "5-10x as performant".
It easily is. See the benchmarks[0] from faster-whisper, which uses CTranslate2. That's 5x faster than the OpenAI reference code on a Tesla V100. Needless to say, something like a 4080 easily multiplies that.
> https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuse to be double the price when it comes to Whisper inference, with the 7900 XTX being directly comparable to the 4080, albeit with higher power draw. To be fair, it's not using ROCm but Direct3D 11, but for the performance/price argument's sake that detail is not relevant.
With all due respect to the author of the article, this is "my first entry into ML" territory. They talk about a 5-10 second delay; my project can do sub-1-second times[1] even with ancient GPUs, thanks to CTranslate2. I don't have an RTX 4080, but if you look at the performance stats for the closest thing (RTX 4090), the numbers are positively bonkers - completely untouchable for anything ROCm-based. The same goes for the other projects I linked; lmdeploy does over 100 tokens/s in a single session with Llama 2 13B on my RTX 4090 and almost 600 tokens/s across eight simultaneous sessions.
> EDIT: Also using CTranslate2 as an example is not great as it's actually a good showcase why ROCm is so far behind CUDA: It's all about adapting the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then would need additional effort to add ROCm support that projects with low amount of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no-one is working to get ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )
I don't understand what you're saying here. It, along with the other projects I linked, is a fantastic example of just how far behind the ROCm ecosystem is. ROCm isn't even on the radar for most of them, as your linked issue highlights.
Things always get implemented in CUDA first (ten years in this space and I've never seen ROCm first) and ROCm users either wait months (minimum) for sub-par performance or never get it at all.
[0] - https://github.com/guillaumekln/faster-whisper#benchmark
[1] - https://heywillow.io/components/willow-inference-server/#ben...
- Open Source Libraries
guillaumekln/faster-whisper
- Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
Neat to see a new implementation, although I'll note that for those looking for a drop-in replacement for the whisper library, both faster-whisper https://github.com/guillaumekln/faster-whisper and https://github.com/m-bain/whisperX are easier (PyTorch-based, no web browser required) and a lot faster (WhisperX is up to 70x realtime).
- Whisper.api: An open source, self-hosted speech-to-text with fast transcription
One caveat here is that whisper.cpp does not offer any CUDA support at all; acceleration is only available for Apple Silicon.
If you have Nvidia hardware, the CTranslate2-based faster-whisper is very, very fast: https://github.com/guillaumekln/faster-whisper
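To make that concrete, the CUDA path is just constructor options; the values below follow faster-whisper's README examples, and the input file is a placeholder:

```python
from faster_whisper import WhisperModel

# FP16 inference through CTranslate2 is where most of the speedup comes from
model = WhisperModel("large-v2", device="cuda", compute_type="float16")
# or trade a little accuracy for memory/throughput:
# model = WhisperModel("large-v2", device="cuda", compute_type="int8_float16")

segments, _ = model.transcribe("audio.wav")
print(" ".join(seg.text.strip() for seg in segments))
```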
What are some alternatives?
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
whisper.cpp - Port of OpenAI's Whisper model in C/C++
cheetah - On-device streaming speech-to-text engine powered by deep learning
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
kaldi-active-grammar - Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time
stable-ts - Transcription, forced alignment, and audio indexing with OpenAI's Whisper
GassistPi - Google Assistant for Single Board Computers
whisper-diarization - Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
mr-robot - A multi-utility Discord bot. Play back hilarious voice tracks on demand, a wiki for anything, turn IoT-enabled devices on/off, and more!
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
hollow-knight-voice-commands - A fun little python tool to play Hollow Knight with only voice commands
whisper-realtime - Whisper runs in realtime on a laptop GPU (8GB)