| | Caster | kaldi-active-grammar |
|---|---|---|
| Mentions | 7 | 10 |
| Stars | 329 | 329 |
| Growth | 0.6% | - |
| Activity | 2.9 | 0.0 |
| Last commit | about 1 month ago | 10 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Caster
- Ask HN: I'm disabled and out of money. Now what?
- Is there a Foundry VTT module that helps people who have difficulty moving their hands and fingers?
- Dragonfly-Based Voice Programming and Accessibility Toolkit
- Ask HN: Who Wants to Collaborate?
Unfortunately Dragon development has mostly stalled for the last 5 years (Dragon 15 was a leap forward but that was quite some time ago now).
You can still make use of it via Dragonfly (see also Caster[0]) as mentioned by a sibling comment or by using Talon[1] or Vocola.
Having used a computer 90% hands-free for about a year and a half back in 2019, I chose Dragonfly then, but would probably choose Talon nowadays - less futzing about, and it has alternative speech engine options.
I also recommend looking into eye tracking: the Tobii gaming products[2] work well for general computer mousing with some software like Talon or Precision Gaze[3] - well enough for me to make a hands free mod[4] for Factorio, for example.
[0]: https://github.com/dictation-toolbox/Caster
- How can I make Mycroft recognize non verbal audio sounds to command it?
- Linux Voice recognition/dictation/voice assistant/ one handed operation?
- Any programmers using dictation?
So I found this thing called Caster today that miiight save my job. It does allow you to format code with Dragon and navigate VS Code (albeit poorly). It's also open-source, so you can add features.
kaldi-active-grammar
- Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- AMD Screws Gamers: Sponsorships Likely Block DLSS
- Software I'm Thankful For
- Why, in 2022, is there no high quality method for voice control of a PC?
With an open system/engine, you can train your own personal speech model. For kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), you can do so without all that much difficulty, although the process/documentation could certainly use improvement.
I bootstrapped my personal speech model by retaining recordings of my commands while using WSR. My voice is quite abnormal, and it took only 10 hours of speech data to train a model orders of magnitude more accurate than any generic model I've ever used. And of course, I retain much of my usage now with Kaldi, so my model improves more and more over time. A virtuous flywheel!
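The retention workflow the comment describes - saving the audio of each recognized command together with its transcript, so the pairs can later be used for acoustic-model training - can be sketched as a small loop. Note this is an illustrative sketch only: the directory layout, file names, and TSV manifest format below are assumptions, not kaldi-active-grammar's actual on-disk format.

```python
# Sketch of "retain your own usage" for later model training:
# each recognized utterance's audio is saved next to its transcript.
# Paths and manifest format are illustrative assumptions.
import os

def retain_utterance(retain_dir, audio_bytes, transcript, index):
    """Save one (audio, transcript) pair and append to a training manifest."""
    os.makedirs(retain_dir, exist_ok=True)
    wav_path = os.path.join(retain_dir, f"utt_{index:06d}.wav")
    with open(wav_path, "wb") as f:
        f.write(audio_bytes)  # raw audio as captured from the microphone
    with open(os.path.join(retain_dir, "manifest.tsv"), "a") as f:
        f.write(f"{wav_path}\t{transcript}\n")
    return wav_path

# Simulated recognitions (fake bytes stand in for real recordings).
for i, (audio, text) in enumerate([(b"\x00\x01", "save file"),
                                   (b"\x02\x03", "new function")]):
    retain_utterance("retained_audio", audio, text, i)
```

Over months of normal use, a manifest like this accumulates exactly the kind of personalized (audio, transcript) corpus the comment credits for the accuracy jump.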
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
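To give a flavor of the grammar-based approach the comment describes, Dragonfly's core abstraction is a mapping rule that binds spoken phrases to actions such as keystrokes or text insertion. The sketch below is a toy, dependency-free imitation of that idea in plain Python; the real Dragonfly `MappingRule` and `Grammar` classes additionally require a speech backend (Dragon, WSR, or kaldi-active-grammar) and a microphone, and the phrases and actions here are invented for illustration.

```python
# Toy imitation of a Dragonfly-style mapping rule: spoken phrases map
# to "actions" (strings standing in for keystrokes or inserted text).
# The real library dispatches these as OS-level input events.

class MappingRule:
    """Match a recognized utterance against a phrase->action mapping."""

    def __init__(self, mapping):
        self.mapping = mapping

    def recognize(self, utterance):
        action = self.mapping.get(utterance)
        return action() if callable(action) else action

# Phrases a voice coder might define (hypothetical examples).
coding_rule = MappingRule({
    "new function": "def ():\n    pass",  # insert a function stub
    "save file": "<ctrl-s>",              # emit a keystroke chord
    "dash": "-",
})

print(coding_rule.recognize("save file"))  # -> <ctrl-s>
```

A full voice coding setup layers many such rules into grammars that activate and deactivate depending on the focused application, which is where the flexibility the comment praises comes from.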
- Ask HN: Who Wants to Collaborate?
- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar
Far-field audio is usually harder for any speech system to get right, so having a good-quality mic and using it nearby will _usually_ help with transcription quality. As a long-time Linux user, I would love to see it get some more powerful voice tools - I really hope that this opens up over the next few years. Feel free to drop me an email (on my profile); happy to help with setup on any of the above.
- How can I make Mycroft recognize non verbal audio sounds to command it?
- Linux Voice recognition/dictation/voice assistant/ one handed operation?
- Disabled computer science student ISO advice about single-handed keyboards
kaldi repo: https://github.com/daanzu/kaldi-active-grammar
What are some alternatives?
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
voice_datasets - A comprehensive list of open-source datasets for voice and sound computing (95+ datasets).
nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.
pocketsphinx-python - Python interface to CMU Sphinxbase and Pocketsphinx libraries
rhino - Rhino is an open-source implementation of JavaScript written entirely in Java
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
Common-Voice - Audio Classification with machine learning
talk2windows - Add voice commands to control the Windows 10+ desktop.