|  | DeepMenu | dragonfly |
|---|---|---|
| Mentions | 3 | 17 |
| Stars | 4 | 373 |
| Growth | - | 1.6% |
| Activity | 5.1 | 7.5 |
| Latest commit | 29 days ago | 4 days ago |
| Language | Swift | Python |
| License | - | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepMenu
- If you're interested in eye-tracking, I'm interested in funding you
I've been working on the menuing side [1], based on crossing Fitts's Law with Huffman trees. But I don't know the constraints for ALS.
Hopefully, whoever takes this on doesn't take the standard accessibility approach, which is adding an extra layer of complexity on top of an existing UI.
A good friend, Gordon Fuller, found out he was going blind. So, he co-founded one of the first VR startups in the 90's. Why? For wayfinding.
What we came up with was a concept of universal design: start over from first principles. Seeing Gordon use an accessible UI is painful to watch; it takes three times as many steps to navigate and confirm. So, what is the factor? 0.3x?
Imagine if we could refactor all apps with an LLM, and then couple that with an auto-complete menu. Within that menu is a personal history of all your past traversals.
What would be the result? A 10X? Would my sister in a wheelchair be able to use it? Would love to find out!
[1] https://github.com/musesum/DeepMenu
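The "Fitts's Law crossed with Huffman trees" idea above could be sketched roughly as follows. This is a hypothetical illustration, not DeepMenu's actual algorithm: Fitts's index of difficulty in the Shannon formulation, ID = log2(D/W + 1), estimates the pointing cost of one menu hop, while a Huffman tree built over item-usage frequencies keeps frequently used items shallow in the menu hierarchy.

```python
import heapq
import math
from itertools import count

def fitts_id(distance, width):
    """Fitts's index of difficulty (Shannon form): ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def huffman_depths(freqs):
    """Build a Huffman tree over usage frequencies; return each item's depth.
    Frequently used items end up shallower (fewer menu levels to traverse)."""
    tiebreak = count()  # avoids comparing dicts when two frequencies are equal
    heap = [(f, next(tiebreak), {item: 0}) for item, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every item in them one level deeper.
        merged = {item: depth + 1 for item, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical usage counts for some menu commands.
usage = {"open": 40, "save": 35, "export": 15, "prefs": 7, "about": 3}
depths = huffman_depths(usage)

# Rough expected pointing cost per item: depth (levels to traverse) times the
# Fitts cost of one hop, assuming a fixed distance/width per level.
hop_cost = fitts_id(distance=200, width=40)
costs = {item: depths[item] * hop_cost for item in usage}
```

Under this toy model, "open" (the most frequent command) sits one level deep while "about" sits four levels deep, so total pointing cost tracks usage frequency.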
- Ask HN: Side projects that are making money, but you'd not talk about them?
- SwiftUI in 2022
Am experimenting with a new menuing idiom in SwiftUI [1]. It is both delightful and infuriating. Am delighted by autolayout, but there's way too much semantics around observers. To simplify, I intentionally avoided structs for view models and resorted to classes.
I hope to create a package and use it for a Metal-based visual music synthesizer. But those complaints about playing nice with UIKit? Am rather worried.
[1] https://github.com/musesum/DeepMenu
dragonfly
- Ways to make gaming less painful?
- Seamless: Meta's New Speech Models
https://github.com/dictation-toolbox/dragonfly
- Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- If you're interested in eye-tracking, I'm interested in funding you
As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, https://precisiongazemouse.org/ (Windows) and https://talonvoice.com/ (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.
IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.
(* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
- Ask HN: Would you recommend OpenAI Whisper for Speech to text?
I've experimented with Whisper. I don't know of a way to do commands without parsing dictation. Bottom line: to my knowledge, the model has to be passed 30 seconds of audio. So if your utterance is 5 seconds, it will be padded with 25 seconds of silence.
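For concreteness, the padding arithmetic described above can be sketched in plain Python. This is an illustration of the numbers only, not Whisper's actual implementation (which performs the equivalent step on numpy/torch arrays); the sample rate and window length match what Whisper models expect.

```python
# Pad or trim a mono audio buffer to Whisper's fixed 30-second window.
SAMPLE_RATE = 16_000           # sample rate Whisper models expect (16 kHz)
CHUNK_SECONDS = 30             # fixed window length the model consumes
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def pad_or_trim(samples):
    """Return exactly CHUNK_SAMPLES samples: truncate long input,
    append silence (zeros) to short input."""
    if len(samples) >= CHUNK_SAMPLES:
        return samples[:CHUNK_SAMPLES]
    return samples + [0.0] * (CHUNK_SAMPLES - len(samples))

# A 5-second utterance gets 25 seconds of silence appended.
utterance = [0.1] * (5 * SAMPLE_RATE)
padded = pad_or_trim(utterance)
silence_seconds = (len(padded) - len(utterance)) / SAMPLE_RATE
```

This is why short voice commands are awkward with Whisper: every 5-second utterance still costs a full 30-second window of processing.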
Depending on the platform you're targeting: https://github.com/dictation-toolbox/dragonfly
- Software I’m Thankful For
- Whisper – open source speech recognition by OpenAI
- Found out I have an enchondroma tumour in my hand & it's impacting my typing abilities
What, you don't have years of experience typing one-handed? Oh well, you'll become an expert now. I've seen this tool used to program Python with Dragon NaturallySpeaking; maybe give it a go... https://github.com/dictation-toolbox/dragonfly
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
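As a taste of what the commenter describes, a minimal Dragonfly grammar looks roughly like this. It is a sketch that assumes the `dragonfly` package and a supported speech backend are installed; `Grammar`, `MappingRule`, `Key`, and `Text` are Dragonfly's documented building blocks, though the specific commands here are invented for illustration.

```python
from dragonfly import Grammar, MappingRule, Key, Text

class PythonCommands(MappingRule):
    # Spoken phrase -> action. Key() sends keystrokes, Text() types literal
    # text, and actions can be chained with "+".
    mapping = {
        "save file": Key("c-s"),                            # Ctrl+S
        "new function": Text("def "),                       # start a def
        "print statement": Text("print()") + Key("left"),   # cursor in parens
    }

grammar = Grammar("python commands")
grammar.add_rule(PythonCommands())
grammar.load()   # grammar is now active; call grammar.unload() to deactivate
```

Each mapping entry becomes a voice command the recognizer listens for while the grammar is loaded, which is the basic unit people build whole voice-coding systems (like Caster) out of.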
- Ask HN: Who Wants to Collaborate?
What are some alternatives?
swift-async-algorithms - Async Algorithms for Swift
community - Voice command set for Talon, community-supported.
BreadBuddy - Recipe scheduler for iOS
kaldi-active-grammar - Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time
PrecisionGazeMouse - Precisely move your mouse by gazing at a point on the screen or by moving your head
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
PDFEncrypt - A C# application to encrypt existing PDF documents
Diverse-Stardew-Valley
adequate-can
crkbd - Corne keyboard, a split keyboard with 3x6 column staggered keys and 3 thumb keys.
RIBs - Uber's cross-platform mobile architecture framework.
openai-whisper-realtime - A quick experiment to achieve almost realtime transcription using Whisper.