kaldi-active-grammar
whisper-writer
| | kaldi-active-grammar | whisper-writer |
|---|---|---|
| Mentions | 10 | 2 |
| Stars | 329 | 184 |
| Growth | - | - |
| Activity | 0.0 | 6.9 |
| Latest commit | 10 months ago | about 1 month ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kaldi-active-grammar
- Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- AMD Screws Gamers: Sponsorships Likely Block DLSS
- Software I'm Thankful For
- Why, in 2022, is there no high quality method for voice control of a PC?
With an open system/engine, you can train your own personal speech model. For kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), you can do so without all that much difficulty, although the process/documentation could certainly use improvement.
I bootstrapped my personal speech model by retaining recordings of my commands while using WSR. My voice is quite abnormal, and it took only 10 hours of speech data to train a model orders of magnitude more accurate than any generic model I've ever used. And of course, I retain much of my usage now with Kaldi, so my model improves more and more over time. A virtuous flywheel!
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon NaturallySpeaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
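Dragonfly builds voice-coding systems out of grammars that map spoken phrases to keyboard or text actions. A stdlib-only toy sketch of that idea (this is not Dragonfly's actual API; the phrases and action strings are illustrative):

```python
# Toy command grammar: spoken phrase -> action, loosely modeled on the
# grammar/rule idea that voice-coding libraries like Dragonfly use.
# (NOT Dragonfly's API; a real system would emit keystrokes, not strings.)

def make_dispatcher(mapping):
    """Return a function that looks up a recognized phrase and runs its action."""
    def dispatch(phrase):
        action = mapping.get(phrase.lower())
        if action is None:
            return None  # unrecognized phrase: fall through to free dictation
        return action()
    return dispatch

# Illustrative rules for a handful of editor commands.
rules = {
    "save file": lambda: "press ctrl+s",
    "new line": lambda: "press enter",
    "say hello": lambda: "type 'hello'",
}

dispatch = make_dispatcher(rules)
```

The appeal for voice coding is that the mapping is plain data: adding a new command is one line, which is what makes such systems so customizable.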
- Ask HN: Who Wants to Collaborate?
- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar
Far-field audio is usually harder for any speech system to get right, so having a good quality mic and using it nearby will _usually_ help with transcription quality. As a long-time Linux user, I would love to see it get some more powerful voice tools; I really hope this opens up over the next few years. Feel free to drop me an email (on my profile); happy to help with setup on any of the above.
- How can I make Mycroft recognize non verbal audio sounds to command it?
- Linux Voice recognition/dictation/voice assistant/ one handed operation?
- Disabled computer science student ISO advice about single-handed keyboards
kaldi repo: https://github.com/daanzu/kaldi-active-grammar
whisper-writer
- Show HN: WhisperWriter – Speech-to-text using OpenAI's Whisper, coded by ChatGPT
- Using ChatGPT to generate a GPT project end-to-end
I've also made six small apps completely coded by ChatGPT (with GitHub Copilot contributing a bit as well). Here are the two largest:
PlaylistGPT (https://github.com/savbell/playlist-gpt): A fun little web app that allows you to ask questions about your Spotify playlists and receive answers from Python code generated by OpenAI's models. I even added a feature where if the code written by GPT runs into errors, it can send the code and the error back to the model and ask it to fix it. It actually can debug itself quite often! One of the most impressive things for me was how it was able to model the UI after the Spotify app with little more than me asking it to do exactly that.
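The "send the code and the error back to the model" loop described above can be sketched in a few lines. This is a generic retry pattern, not PlaylistGPT's actual code; `ask_model` is a hypothetical stand-in for a real LLM API call:

```python
# Toy "self-debugging" loop: execute model-generated code, and on failure
# feed the traceback back to the model and ask for a fix.
# `ask_model(prompt) -> code string` is a hypothetical LLM call.
import traceback

def self_debug(ask_model, prompt, max_attempts=3):
    """Request code, run it, and retry with the error message on failure."""
    code = ask_model(prompt)
    for _ in range(max_attempts):
        try:
            namespace = {}
            exec(code, namespace)  # run the generated snippet
            return code, namespace.get("result")
        except Exception:
            err = traceback.format_exc()
            code = ask_model(
                f"This code failed:\n{code}\nError:\n{err}\nPlease fix it."
            )
    raise RuntimeError("model could not produce working code")
```

Capping the attempts matters in practice: a model that keeps producing broken code would otherwise loop (and bill) forever.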
WhisperWriter (https://github.com/savbell/whisper-writer): A small speech-to-text app that uses OpenAI's Whisper API to auto-transcribe recordings from a user's microphone. It waits for a keyboard shortcut to be pressed, then records from the user's microphone until it detects a pause in their speech, and then types out the Whisper transcription to the active window. It only took me two hours to get a working prototype up and running, with additions such as graphic indicators taking a few more hours to implement.
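The record-until-pause step can be sketched with simple RMS-based silence detection; this is an assumption for illustration (WhisperWriter may detect pauses differently), with audio capture and the Whisper API call stubbed out. Chunks here are plain lists of float samples:

```python
# Minimal sketch of the record-until-pause idea: buffer audio chunks until
# several consecutive quiet chunks are seen, then hand the buffer off for
# transcription. RMS thresholding is an illustrative choice, not
# necessarily what WhisperWriter itself does.
import math

def rms(chunk):
    """Root-mean-square energy of one audio chunk (a list of float samples)."""
    return math.sqrt(sum(s * s for s in chunk) / len(chunk))

def record_until_pause(chunks, silence_threshold=0.01, pause_chunks=5):
    """Buffer chunks until `pause_chunks` consecutive quiet chunks occur."""
    buffered, quiet = [], 0
    for chunk in chunks:
        buffered.append(chunk)
        quiet = quiet + 1 if rms(chunk) < silence_threshold else 0
        if quiet >= pause_chunks:
            break
    return buffered  # would then be sent to Whisper for transcription
```

In the real app the chunk stream would come from the microphone after the hotkey fires, and the returned buffer would be transcribed and typed into the active window.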
I created the first for fun and the second to help me overcome a disability that impacts my ability to use a keyboard. I now use WhisperWriter literally every day (I'm even typing part of this comment with it), and I used it to prompt ChatGPT to write the code for a few additional personal projects that improve my quality-of-life in small ways. If people are interested, I may write up more about the prompting and pair programming process, since I definitely learned a lot as I worked through these, including some similar lessons to the article!
Personally, I am super excited about the possibilities these AI technologies open up for people like me, who may be facing small challenges that could be easily solved with a tiny app written in a few hours tailored specifically to their problem. I had been struggling to use my desktop computer because the Windows Dictation tool was very broken for me, but now I feel like I can use it to my full capacity again because I can type with WhisperWriter. Coding now takes a minimal amount of keyboard use thanks to these AI coding assistants -- and I am super grateful for that!
What are some alternatives?
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
WhisperLive - A nearly-live implementation of OpenAI's Whisper.
nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.
AI-Waifu-Vtuber - AI Vtuber for Streaming on Youtube/Twitch
pocketsphinx-python - Python interface to CMU Sphinxbase and Pocketsphinx libraries
playlist-gpt - πΆπ©βπ» A fun little web app that analyzes your Spotify playlists with help from OpenAI's language models.
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
easy-chat - A ChatGPT UI for young readers, written by ChatGPT
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
whisper-openai-gradio-implementation - Whisper is an automatic speech recognition (ASR) system Gradio Web UI Implementation
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx
shorthanddictation - Dictation program, which uses the reading speed unit syllables per minute