kaldi-active-grammar
dragonfly
| | kaldi-active-grammar | dragonfly |
|---|---|---|
| Mentions | 10 | 17 |
| Stars | 329 | 373 |
| Growth | - | 3.8% |
| Activity | 0.0 | 7.5 |
| Latest commit | 10 months ago | 3 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kaldi-active-grammar
- Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- AMD Screws Gamers: Sponsorships Likely Block DLSS
- Software I’m Thankful For
- Why, in 2022, is there no high quality method for voice control of a PC?
With an open system/engine, you can train your own personal speech model. For kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), you can do so without much difficulty, although the process/documentation could certainly use improvement.
I bootstrapped my personal speech model by retaining recordings of my commands while using WSR (Windows Speech Recognition). My voice is quite abnormal, and it took only 10 hours of speech data to train a model orders of magnitude more accurate than any generic model I've ever used. And of course, I retain much of my usage now with Kaldi, so my model improves more and more over time. A virtuous flywheel!
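The bootstrapping loop described above (keep each recognized utterance's audio plus its transcript as future training data) can be sketched roughly. The `utt-id transcript` line format below mimics Kaldi's simple `text` index file, but the directory layout and function name are illustrative, not kaldi-active-grammar's actual API.

```python
# Hedged sketch: accumulate a personal training corpus by logging each
# recognized utterance's audio next to a Kaldi-style "text" index file
# ("utt-id transcript" per line). Layout and names are illustrative.
import os
import time


def log_utterance(corpus_dir, transcript, audio_bytes):
    """Save one utterance's audio and append its transcript to the index."""
    os.makedirs(corpus_dir, exist_ok=True)
    utt_id = f"utt-{time.time_ns()}"  # unique, monotonic-ish id
    # Write the raw audio (assumed already WAV-encoded by the engine).
    with open(os.path.join(corpus_dir, f"{utt_id}.wav"), "wb") as f:
        f.write(audio_bytes)
    # Append the "utt-id transcript" line to the Kaldi-style text index.
    with open(os.path.join(corpus_dir, "text"), "a", encoding="utf-8") as f:
        f.write(f"{utt_id} {transcript}\n")
    return utt_id
```

Over months of normal use, the `text` file plus the WAV files become exactly the paired (audio, transcript) data an acoustic-model training run consumes.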
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
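The core idea of a Dragonfly-style setup is a grammar mapping spoken phrases to actions, with slots for free dictation. Dragonfly's real API uses `Grammar`/`MappingRule` classes and `Key`/`Text` actions; the toy dispatcher below only illustrates the phrase-to-action concept and is not Dragonfly code.

```python
# Toy sketch of the command-grammar idea behind Dragonfly: fixed spoken
# phrases map to keystroke actions, and a prefix rule captures dictation.
# (Dragonfly itself uses Grammar/MappingRule with Key/Text actions.)

def make_grammar():
    def press(keys):
        # A fixed phrase resolves to a keystroke action.
        return lambda _captured: f"press:{keys}"

    def type_text(captured):
        # A dictation capture resolves to typed text.
        return f"type:{captured}"

    return {
        "save file": press("ctrl-s"),
        "new line": press("enter"),
        "say": type_text,  # prefix rule: "say <dictation>"
    }


def recognize(grammar, utterance):
    """Dispatch a spoken utterance to the matching rule, if any."""
    if utterance in grammar:
        return grammar[utterance](None)
    head, _, rest = utterance.partition(" ")
    if head in grammar and rest:
        return grammar[head](rest)
    return None  # out-of-grammar speech is rejected
```

Restricting recognition to an active grammar like this is what makes command accuracy so much higher than free-form dictation, and it is the trick kaldi-active-grammar implements at the engine level.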
- Ask HN: Who Wants to Collaborate?
- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar
Far field audio is usually harder for any speech system to get correct, so having a good quality mic and using it nearby will _usually_ help with the transcription quality. As a long time Linux user, I would love to see it get some more powerful voice tools - really hope that this opens up over the next few years. Feel free to drop me an email (on my profile); I'm happy to help with setup on any of the above.
- How can I make Mycroft recognize non verbal audio sounds to command it?
- Linux Voice recognition/dictation/voice assistant/ one handed operation?
- Disabled computer science student ISO advice about single-handed keyboards
kaldi repo: https://github.com/daanzu/kaldi-active-grammar
dragonfly
- Ways to make gaming less painful?
- Seamless: Meta's New Speech Models
https://github.com/dictation-toolbox/dragonfly
- If you're interested in eye-tracking, I'm interested in funding you
As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, https://precisiongazemouse.org/ (Windows) and https://talonvoice.com/ (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.
IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.
(* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
- Ask HN: Would you recommend OpenAI Whisper for Speech to text?
I've experimented with Whisper. I don't know of a way to do commands without parsing dictation. Bottom line: to my knowledge, the model has to be fed 30 seconds of audio at a time, so if your utterance is 5 seconds, it gets padded out with 25 seconds of silence.
Depending on the platform you're targeting, https://github.com/dictation-toolbox/dragonfly may be worth a look.
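That fixed 30-second window can be sketched in a few lines. The `pad_or_trim` helper below mirrors the behavior of the same-named function in the openai-whisper package, but operates on a plain list of samples for illustration.

```python
# Sketch of Whisper's fixed input window: the model consumes features
# for exactly 30 s of 16 kHz audio, so a 5 s utterance is padded with
# 25 s of silence. (cf. whisper.pad_or_trim in the openai-whisper repo;
# this stand-in works on plain Python lists.)
WINDOW_SECONDS = 30
SAMPLE_RATE = 16000  # Hz, Whisper's expected sample rate


def pad_or_trim(samples, length=WINDOW_SECONDS * SAMPLE_RATE):
    """Zero-pad (silence) or trim so that len(result) == length."""
    if len(samples) >= length:
        return samples[:length]
    return samples + [0.0] * (length - len(samples))
```

This is why short, snappy commands are an awkward fit: most of each window is silence, and latency is tied to the window rather than to the utterance.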
- Software I’m Thankful For
- Whisper – open source speech recognition by OpenAI
- Found out I have an enchondroma tumour in my hand & it's impacting my typing abilities
What, you don't have years of experience typing one-handed? Oh well, you'll become an expert now. I've seen this tool used to program Python with Dragon NaturallySpeaking; maybe give it a go... https://github.com/dictation-toolbox/dragonfly
- Ask HN: Who Wants to Collaborate?
What are some alternatives?
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
community - Voice command set for Talon, community-supported.
nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
pocketsphinx-python - Python interface to CMU Sphinxbase and Pocketsphinx libraries
Diverse-Stardew-Valley
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
crkbd - Corne keyboard, a split keyboard with 3x6 column staggered keys and 3 thumb keys.
openai-whisper-realtime - A quick experiment to achieve almost realtime transcription using Whisper.
Common-Voice - Audio Classification with machine learning
helix - A compact split ortholinear keyboard.