silero-vad vs kaldi-active-grammar

| | silero-vad | kaldi-active-grammar |
|---|---|---|
| Mentions | 10 | 10 |
| Stars | 2,866 | 329 |
| Growth | - | - |
| Activity | 6.9 | 0.0 |
| Latest Commit | 10 days ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
silero-vad
-
New models and developer products announced at OpenAI DevDay
> How do you detect speech starting and stopping?
https://github.com/snakers4/silero-vad
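For context on what a VAD does here, below is a naive frame-energy sketch of detecting where speech starts and stops. This is illustrative only: Silero VAD replaces this kind of crude thresholding with a trained neural model, and the function and parameter names here are made up.

```python
import math

def detect_speech(samples, frame_len=400, threshold=0.01):
    """Return (start, end) sample spans where frame RMS energy exceeds
    the threshold -- a naive stand-in for a real VAD."""
    spans, start = [], None
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        voiced = rms > threshold
        if voiced and start is None:
            start = i                      # speech just started
        elif not voiced and start is not None:
            spans.append((start, i))       # speech just stopped
            start = None
    if start is not None:                  # speech ran to end of audio
        spans.append((start, len(samples)))
    return spans
```

A fixed energy threshold like this breaks down with background noise, which is exactly why a learned model such as Silero VAD is preferred in practice.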
-
[Discussion] Video Translation Task
you could look into https://github.com/guillaumekln/faster-whisper especially the VAD section (Voice Activity Detector) using https://github.com/snakers4/silero-vad
-
Using Whisper to transcribe the entire Forensic Files series
I also had the same synchronization issue, so I wrote a WebUI/CLI that uses Silero-VAD to first split the audio whenever there is a silent portion (or every 30 seconds), and I haven't experienced it since.
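The splitting strategy described above, cutting in silence but never letting a chunk run too long, can be sketched as follows. This is a hypothetical helper, not the actual WebUI/CLI code; it takes (start, end) speech timestamps such as those a VAD like Silero-VAD produces.

```python
def split_points(speech_segments, max_chunk=30.0):
    """Cut at the midpoint of every silent gap between VAD speech
    segments; additionally force a cut every max_chunk seconds inside
    any speech segment longer than max_chunk."""
    cuts = []
    # Cut in the middle of each silent gap between consecutive segments.
    for (_, end_a), (start_b, _) in zip(speech_segments, speech_segments[1:]):
        cuts.append((end_a + start_b) / 2.0)
    # Hard cuts inside speech that exceeds the chunk limit.
    for start, end in speech_segments:
        t = start + max_chunk
        while t < end:
            cuts.append(t)
            t += max_chunk
    return sorted(cuts)
```

Cutting in silence rather than at arbitrary offsets avoids splitting a word in half, which is what causes transcription and synchronization glitches at chunk boundaries.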
-
Whisper - A new free AI model from OpenAI that can transcribe Japanese (and many other languages) at up to "human level" accuracy
By the way, I've updated the WebUI to now also support using Silero VAD to break up the audio into distinct sections, and run Whisper on each section and then combine them into one single transcript/SRT file.
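Combining per-section results into one transcript/SRT file amounts to offsetting each section's timestamps by that section's start time in the original audio. A minimal sketch with hypothetical helper names (not the WebUI's actual code):

```python
def srt_timestamp(t):
    """Format seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(t * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def merge_to_srt(sections):
    """sections: list of (section_offset, [(start, end, text), ...]),
    where start/end are relative to the section. Returns one SRT string
    with globally renumbered entries and absolute timestamps."""
    entries, n = [], 0
    for offset, segs in sections:
        for start, end, text in segs:
            n += 1
            entries.append(
                f"{n}\n{srt_timestamp(offset + start)} --> "
                f"{srt_timestamp(offset + end)}\n{text}\n"
            )
    return "\n".join(entries)
```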
-
[P] A more detailed post about Silero VAD on The Gradient
The VAD is always available on GitHub
- Silero VAD: pre-trained enterprise-grade voice activity detector
-
[P] Silero VAD: One voice detector to rule them all
I also pinned some interesting comments regarding mobile and IoT usage here - https://github.com/snakers4/silero-vad/issues/37
- One voice detector to rule them all
kaldi-active-grammar
-
Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- AMD Screws Gamers: Sponsorships Likely Block DLSS
- Software I’m Thankful For
-
Why, in 2022, is there no high quality method for voice control of a PC?
With an open system/engine, you can train your own personal speech model. For kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), you can do so without all that much difficulty, although the process/documentation could certainly use improvement.
I bootstrapped my personal speech model by retaining recordings of the commands I dictated while using WSR. My voice is quite abnormal, and it took only 10 hours of speech data to train a model orders of magnitude more accurate than any generic model I've ever used. And of course, I retain much of my usage now with Kaldi, so my model improves more and more over time. A virtuous flywheel!
-
Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
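To give a flavor of the Dragonfly approach of mapping spoken phrases to actions via rule objects, here is a toy, dependency-free sketch. The real library's `MappingRule` binds phrases to `Key`/`Text` action objects and supports far richer grammar elements; the class below only borrows the name for illustration.

```python
class MappingRule:
    """Toy stand-in for a Dragonfly-style mapping rule: a spoken
    phrase triggers a callable action. Illustrative only -- real
    Dragonfly rules compile into grammars loaded by the speech engine."""

    def __init__(self, mapping):
        self.mapping = mapping

    def recognize(self, utterance):
        # Look up the recognized phrase and fire its action, if any.
        action = self.mapping.get(utterance)
        return action() if action else None

# Hypothetical example commands for a voice-coding setup.
rule = MappingRule({
    "save file": lambda: "press ctrl-s",
    "new line": lambda: "press enter",
})
```

In real Dragonfly use, rules like this are added to a `Grammar` and loaded into a backend such as kaldi-active-grammar, which recognizes only the currently active phrases, keeping accuracy high for command sets.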
-
Ask HN: Who Wants to Collaborate?
- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar
Far-field audio is usually harder for any speech system to get right, so a good-quality mic used nearby will _usually_ help with transcription quality. As a long-time Linux user, I would love to see Linux get some more powerful voice tools - I really hope that this opens up over the next few years. Feel free to drop me an email (address on my profile); happy to help with setup on any of the above.
- How can I make Mycroft recognize non verbal audio sounds to command it?
- Linux Voice recognition/dictation/voice assistant/ one handed operation?
-
Disabled computer science student ISO advice about single-handed keyboards
kaldi repo: https://github.com/daanzu/kaldi-active-grammar
What are some alternatives?
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.
cheetah - On-device streaming speech-to-text engine powered by deep learning
pocketsphinx-python - Python interface to CMU Sphinxbase and Pocketsphinx libraries
GassistPi - Google Assistant for Single Board Computers
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
mr-robot - A multi-utility discord bot. Playback hilarious voice tracks on-demand, wiki for anything, turn on/off IoT enabled devices, and more!
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
hollow-knight-voice-commands - A fun little python tool to play Hollow Knight with only voice commands
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx
Common-Voice - Audio Classification with machine learning