If you're interested in eye-tracking, I'm interested in funding you

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. Brain

    Arduino library for reading Neurosky EEG brainwave data. (Tested with the MindFlex and Force Trainer toys.) (by kitschpatrol)

  2. InfluxDB

    InfluxDB high-performance time series database. Collect, organize, and act on massive volumes of high-resolution data to power real-time intelligent systems.

  3. eyewriter

    open source eye tracking and drawing toolkit

  4. DeepMenu

    I've been working on the menuing side [1], based on crossing Fitts's law with Huffman trees. But I don't know the constraints for ALS.

    Hopefully, whoever takes this on doesn't take the standard accessibility approach, which is adding an extra layer of complexity on top of an existing UI.

    A good friend, Gordon Fuller, found out he was going blind. So he co-founded one of the first VR startups in the '90s. Why? For wayfinding.

    What we came up with was a concept of universal design: start over from first principles. Seeing Gordon use an accessible UI is painful to watch; it takes three times as many steps to navigate and confirm. So what is the factor? 0.3x?

    Imagine if we could refactor all apps with an LLM, and then couple that with an auto-complete menu. Within that menu is a personal history of all your past traversals.

    What would be the result? A 10x? Would my sister in a wheelchair be able to use it? Would love to find out!

    [1] https://github.com/musesum/DeepMenu
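
    The "Fitts's law crossed with Huffman trees" idea has a concrete core: weight each menu command by how often it is used, and let a Huffman tree decide its depth, so frequent commands need fewer selections to reach. Below is a minimal sketch of the Huffman half in plain Python; the command names and frequencies are invented for illustration, and nothing here is taken from the DeepMenu code.

```python
import heapq
from itertools import count

def huffman_depths(freqs):
    """Build a Huffman tree over items weighted by use frequency and
    return each item's depth (number of selections to reach it)."""
    tie = count()  # unique tie-breaker so heapq never compares dicts
    heap = [(f, next(tie), {name: 0}) for name, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)  # two least-frequent subtrees
        fb, _, b = heapq.heappop(heap)
        merged = {k: d + 1 for k, d in (a | b).items()}  # one level deeper
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2]

# Hypothetical usage counts for five menu commands.
usage = {"open": 50, "save": 30, "copy": 10, "paste": 8, "quit": 2}
depths = huffman_depths(usage)
# Frequent commands end up shallower in the menu tree than rare ones.
```

    For these sample frequencies the expected path length works out to 1.8 selections per command, versus about 2.3 for a balanced five-item tree. A Fitts's-law term would additionally weight each selection by target distance and size, which the sketch above leaves out.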

  5. bootl-attacks

  6. adequate-can

  7. rizin

    UNIX-like reverse engineering framework and command-line toolset.

  8. cutter

    Free and Open Source Reverse Engineering Platform powered by rizin

  9. CodeRabbit

    CodeRabbit: AI Code Reviews for Developers. Revolutionize your code reviews with AI. CodeRabbit offers PR summaries, code walkthroughs, 1-click suggestions, and AST-based analysis. Boost productivity and code quality across all major languages with each PR.

  10. lmms

    Cross-platform music production software

  11. dragonfly

    Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx (by dictation-toolbox)

    As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, https://precisiongazemouse.org/ (Windows) and https://talonvoice.com/ (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.

    IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.

    (* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
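
    To make the "it's all about the UX wrapped around it" point concrete, here is a toy sketch of the kind of layer gaze-mouse tools put on top of a raw tracker: exponential smoothing of noisy gaze samples plus a dwell click. This is purely illustrative; it is not code from PrecisionGazeMouse or Talon, and the parameter values are made up.

```python
class DwellClicker:
    """Toy gaze-pointer filter: exponentially smooth noisy gaze samples
    and fire a 'click' when the smoothed point stays within a small
    radius for a set number of consecutive samples (a dwell click)."""

    def __init__(self, alpha=0.3, radius=20.0, dwell_samples=30):
        self.alpha = alpha            # smoothing factor (0..1]
        self.radius = radius          # dwell tolerance, in pixels
        self.dwell_samples = dwell_samples
        self.pos = None               # smoothed pointer position
        self.anchor = None            # where the current dwell started
        self.count = 0

    def update(self, x, y):
        """Feed one raw gaze sample; return True when a dwell completes."""
        if self.pos is None:
            self.pos = (x, y)
        else:
            px, py = self.pos  # exponential moving average of samples
            self.pos = (px + self.alpha * (x - px),
                        py + self.alpha * (y - py))
        if self.anchor is None:
            self.anchor, self.count = self.pos, 1
            return False
        ax, ay = self.anchor
        sx, sy = self.pos
        if (sx - ax) ** 2 + (sy - ay) ** 2 <= self.radius ** 2:
            self.count += 1
            if self.count >= self.dwell_samples:
                self.anchor, self.count = None, 0
                return True  # dwell complete -> click
        else:  # gaze moved away: restart the dwell timer here
            self.anchor, self.count = self.pos, 1
        return False
```

    Real tools add much more (per-target snapping, head-movement fallback, voice or mouth-noise triggers instead of dwell), which is exactly the UX layer the comment above is pointing at.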

  12. PrecisionGazeMouse

    Precisely move your mouse by gazing at a point on the screen or by moving your head


  13. BlinkPoseAndSwipeiOSMLKit

    A Tinder-style swiping-card prototype that does touchless swiping using face gestures.

    Does something like a blink/wink tracker for a swiping UI pique your interest? I built a PoC a while back: https://github.com/anupamchugh/BlinkPoseAndSwipeiOSMLKit

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
