dragonfly vs openai-whisper-realtime
| | dragonfly | openai-whisper-realtime |
|---|---|---|
| Mentions | 17 | 1 |
| Stars | 373 | 180 |
| Growth | 1.6% | - |
| Activity | 7.5 | 10.0 |
| Latest commit | 11 days ago | over 1 year ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dragonfly
- Ways to make gaming less painful?
-
Seamless: Meta's New Speech Models
https://github.com/dictation-toolbox/dragonfly
-
Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
-
If you're interested in eye-tracking, I'm interested in funding you
As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, https://precisiongazemouse.org/ (Windows) and https://talonvoice.com/ (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.
IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.
(* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
-
Ask HN: Would you recommend OpenAI Whisper for Speech to text?
I've experimented with Whisper. I don't know of a way to do commands without parsing dictation. Bottom line, to my knowledge the model has to be passed 30 seconds of audio, so if your utterance is 5 seconds, you'll need 25 seconds of silence.
Depending on the platform you're targeting:
https://github.com/dictation-toolbox/dragonfly
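The 30-second constraint the commenter describes can be illustrated with a short padding sketch. This is plain NumPy, assuming 16 kHz mono audio; the function name is illustrative (openai-whisper itself ships a similar `pad_or_trim` helper):

```python
import numpy as np

SAMPLE_RATE = 16_000               # Whisper operates on 16 kHz mono audio
CHUNK_SECONDS = 30                 # the model consumes fixed 30-second windows
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def pad_to_chunk(audio: np.ndarray) -> np.ndarray:
    """Zero-pad (or trim) a waveform to exactly one 30-second chunk."""
    if audio.shape[0] >= CHUNK_SAMPLES:
        return audio[:CHUNK_SAMPLES]
    padding = CHUNK_SAMPLES - audio.shape[0]
    return np.concatenate([audio, np.zeros(padding, dtype=audio.dtype)])

# A 5-second utterance ends up followed by 25 seconds of trailing silence.
utterance = np.random.randn(5 * SAMPLE_RATE).astype(np.float32)
padded = pad_to_chunk(utterance)
```

So a short command phrase still costs a full 30-second window, which is why the commenter finds Whisper awkward for low-latency command recognition.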
- Software I'm Thankful For
- Whisper – open source speech recognition by OpenAI
-
Found out I have an enchondroma tumour in my hand & it's impacting my typing abilities
What, you don't have years of experience typing one-handed? Oh well, you'll become an expert now. I've seen this tool used to program Python with Dragon NaturallySpeaking; maybe give it a go... https://github.com/dictation-toolbox/dragonfly
-
Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
- Ask HN: Who Wants to Collaborate?
openai-whisper-realtime
-
Whisper – open source speech recognition by OpenAI
I tried running it in realtime with live audio input (kind of).
You can find the python script in this repo: https://github.com/tobiashuttinger/openai-whisper-realtime
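Running Whisper "in realtime" generally means buffering microphone audio into fixed-size chunks and transcribing each chunk as it fills, which is roughly the approach a script like the one linked takes. A dependency-free sketch of that buffering loop (the chunk length and the `transcribe` callback are stand-ins; a real script would capture frames with an audio library and call Whisper's transcription API):

```python
from typing import Callable, Iterable, List

SAMPLE_RATE = 16_000
CHUNK_SECONDS = 2  # hand audio to the recognizer every ~2 seconds

def stream_transcribe(
    frames: Iterable[List[float]],
    transcribe: Callable[[List[float]], str],
) -> List[str]:
    """Accumulate incoming audio frames; flush one chunk at a time."""
    buffer: List[float] = []
    results: List[str] = []
    target = SAMPLE_RATE * CHUNK_SECONDS
    for frame in frames:
        buffer.extend(frame)
        if len(buffer) >= target:
            results.append(transcribe(buffer[:target]))
            buffer = buffer[target:]
    if buffer:  # flush any trailing partial chunk
        results.append(transcribe(buffer))
    return results

# Simulated input: 5 seconds of silence delivered in 0.5-second frames.
frames = [[0.0] * (SAMPLE_RATE // 2) for _ in range(10)]
out = stream_transcribe(frames, lambda chunk: f"<{len(chunk)} samples>")
```

The latency/accuracy trade-off lives in `CHUNK_SECONDS`: shorter chunks feel more "live" but give the model less context per pass, which is the "(kind of)" in the comment above.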
What are some alternatives?
community - Voice command set for Talon, community-supported.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
kaldi-active-grammar - Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time
DeepSpeech-examples - Examples of how to use or integrate DeepSpeech
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
mycroft-core - Mycroft Core, the Mycroft Artificial Intelligence platform.
Diverse-Stardew-Valley
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
crkbd - Corne keyboard, a split keyboard with 3x6 column staggered keys and 3 thumb keys.
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
helix - A compact split ortholinear keyboard.
py-webrtcvad - Python interface to the WebRTC Voice Activity Detector