kaldi-active-grammar vs mycroft-precise
 | kaldi-active-grammar | mycroft-precise
---|---|---
Mentions | 10 | 3
Stars | 329 | 797
Growth | - | 2.6%
Activity | 0.0 | 0.0
Latest commit | 10 months ago | 5 months ago
Language | Python | Python
License | GNU Affero General Public License v3.0 | Apache License 2.0
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kaldi-active-grammar
- Ask HN: How do you get started with adding voice commands to a computer system?
  https://github.com/dictation-toolbox/dragonfly
  https://github.com/daanzu/kaldi-active-grammar
- AMD Screws Gamers: Sponsorships Likely Block DLSS
- Software I’m Thankful For
- Why, in 2022, is there no high quality method for voice control of a PC?
With an open system/engine, you can train your own personal speech model. For kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), you can do so without all that much difficulty, although the process/documentation could certainly use improvement.
I bootstrapped my personal speech model by retaining recordings of my commands while using WSR. My voice is quite abnormal, and it took only 10 hours of speech data to train a model orders of magnitude more accurate than any generic model I've ever used. And of course, I retain much of my usage now with Kaldi, so my model improves more and more over time. A virtuous flywheel!
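For context, this retain-as-you-go workflow is supported by the Dragonfly Kaldi backend itself. Below is a minimal sketch assuming dragonfly and kaldi-active-grammar are installed; the model_dir and retain_dir paths are illustrative, not fixed names:

```python
# Minimal sketch: run the Dragonfly Kaldi engine (kaldi-active-grammar)
# while retaining each utterance's audio and recognition metadata, so the
# retained data can later be used to fine-tune a personal model.
from dragonfly import get_engine

engine = get_engine(
    "kaldi",
    model_dir="kaldi_model",      # pretrained model, downloaded separately
    retain_dir="retained_audio",  # utterance audio + metadata saved here
)
engine.connect()
engine.do_recognition()  # listen and recognize until interrupted
```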
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power-user analogue to Dragon NaturallySpeaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, and I do so entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
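To make the Dragonfly side concrete, here is a minimal sketch of a custom command grammar; the specific commands and key bindings are invented for illustration:

```python
# Minimal Dragonfly grammar sketch with two made-up voice commands.
# Saying "save file" presses Ctrl+S; "new function <name>" types a
# Python function stub using the dictated name.
from dragonfly import Grammar, MappingRule, Key, Text, Dictation

class ExampleRule(MappingRule):
    mapping = {
        "save file": Key("c-s"),
        "new function <name>": Text("def %(name)s():") + Key("enter, tab"),
    }
    extras = [Dictation("name")]

grammar = Grammar("example commands")
grammar.add_rule(ExampleRule())
grammar.load()
```

In a real setup, grammar modules like this are loaded by one of Dragonfly's module loaders, with Kaldi, WSR, or DNS doing the actual recognition.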
- Ask HN: Who Wants to Collaborate?
- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar
Far-field audio is usually harder for any speech system to get right, so having a good quality mic and using it nearby will _usually_ help with the transcription quality. As a long-time Linux user, I would love to see it get some more powerful voice tools; I really hope that this opens up over the next few years. Feel free to drop me an email (address on my profile); I'm happy to help with setup on any of the above.
- How can I make Mycroft recognize non-verbal audio sounds to command it?
- Linux voice recognition/dictation/voice assistant/one-handed operation?
- Disabled computer science student ISO advice about single-handed keyboards
  kaldi repo: https://github.com/daanzu/kaldi-active-grammar
mycroft-precise
- Mycroft – open-source voice assistant
> It reliably responds to the wakeword ("hey Mycroft") from men, and only responds about 50% of the time to women.
They have instructions on how to train your own version of the wakeword listener.
https://github.com/MycroftAI/mycroft-precise#train-your-own-...
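Once a custom model has been trained per those instructions, it can be used from Python through the companion precise_runner package. A minimal sketch; the engine binary path, model file name, and wake word are placeholders:

```python
# Minimal sketch: listen for a custom wake word with a trained Precise
# model via precise_runner. The precise-engine binary and the .pb model
# come from the training/conversion steps in the README.
from time import sleep
from precise_runner import PreciseEngine, PreciseRunner

def on_activation():
    print("Wake word detected!")

engine = PreciseEngine("precise-engine/precise-engine", "my-wake-word.pb")
runner = PreciseRunner(engine, on_activation=on_activation)
runner.start()

# PreciseRunner listens on a background thread; keep the process alive.
while True:
    sleep(10)
```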
- I'm working on a bot and could use your help!
The most important part of Astra is detecting when someone is speaking to her. This is done using an RNN (recurrent neural network), implemented by Mycroft's Precise. To use it, we need to collect voice data from many people saying "Astra", potentially multiple times each, and train a model on it. That's where you come in.
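For a rough sense of what that network looks like: Precise feeds MFCC audio features into a small recurrent (GRU) layer that outputs a wake-word probability. The sketch below is illustrative only; the feature shape, layer size, and placeholder data are assumptions, not Precise's actual configuration:

```python
# Illustrative sketch of a Precise-style wake-word network: MFCC frames
# in, a single GRU, a sigmoid probability out. Shapes and sizes are
# assumed for the example, not taken from Precise itself.
import numpy as np
from tensorflow.keras import layers, models

n_frames, n_mfcc = 29, 13  # ~1.5 s of audio as MFCC vectors (assumed shape)

model = models.Sequential([
    layers.GRU(20, input_shape=(n_frames, n_mfcc)),
    layers.Dense(1, activation="sigmoid"),  # P(clip contains the wake word)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder data: label 1 = clip contains "Astra", 0 = background audio.
x = np.random.rand(64, n_frames, n_mfcc).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```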
- Is there a general-purpose teachable "tone detection" sensor?
I've never tried it, but theoretically a wake word system like Mycroft Precise or Raven might not care too much whether your "wake word" is a jingle?
What are some alternatives?
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
Porcupine - On-device wake word detection powered by deep learning
nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.
react-native-spokestack - Spokestack: give your React Native app a voice interface!
pocketsphinx-python - Python interface to CMU Sphinxbase and Pocketsphinx libraries
pico-wake-word - MicroSpeech Wake Word example on the Raspberry Pi Pico. This is a port of the example on the TensorFlow repository.
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx
spokestack-python - Spokestack is a library that allows a user to easily incorporate a voice interface into any Python application with a focus on embedded systems.
Common-Voice - Audio Classification with machine learning
rhasspy-wake-raven - Wake word detection engine based on Snips Personal Wakeword Detector