| | nixpkgs-esp-dev | esp-sr |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 97 | 474 |
| Growth | - | 3.8% |
| Activity | 6.7 | 8.5 |
| Latest commit | 6 days ago | 10 days ago |
| Language | Nix | C |
| License | Creative Commons Zero v1.0 Universal | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nixpkgs-esp-dev
Show HN: Willow – Open-Source Privacy-Focused Voice Assistant Hardware
If you are open to Nix, you can try https://github.com/mirrexagon/nixpkgs-esp-dev. I used it for a small project a while ago and the experience was pretty good.
esp-sr
Testing Speech Recognition (Voice User Interface), like "Hey, Siri" or "OK, Google"
If it is the right thing, yes: https://github.com/espressif/esp-sr
- ESP-Skainet Test (Speech Commands Recognition)
Show HN: Willow – Open-Source Privacy-Focused Voice Assistant Hardware
For wake word and voice activity detection, audio processing, etc., we use the ESP-SR (speech recognition) framework from Espressif[0].
For speech to text there are two options and more to come:
1) Completely on-device command recognition using the ESP-SR Multinet 6 model. Willow will (currently) pull your light and switch entities from Home Assistant and generate the grammar and command definition required by Multinet. We want to develop a Willow Home Assistant component that will provide tighter Willow integration with HA and let users do this point-and-click, with dynamic updates for new/changed entities, different kinds of entities, etc., all in the HA dashboard/config.
The only "issue" with Multinet is that it only supports 400 defined commands. You're not going to get something like "What's the weather like in $CITY?" out of it.
For that we have:
2-?) Our own highly optimized inference server using Whisper, LLaMA/Vicuna, and SpeechT5 from transformers (more to come soon). We're open sourcing it next week. Willow streams audio after wake in real time, gets the STT output, and sends it wherever you want. The Willow Home Assistant component (which doesn't exist yet) will sit in between Willow and either our inference server implementation doing STT/TTS or any other STT/TTS implementation supported by Home Assistant, and handle all of this for you.
[0] - https://github.com/espressif/esp-sr
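The entity-to-command generation described in option 1 can be sketched as follows. This is a hedged illustration, not Willow's actual code: the entity list is hardcoded here (in practice it would come from the Home Assistant REST API, `GET /api/states`), and the exact command format Multinet expects is an assumption.

```python
# Hypothetical sketch: turn Home Assistant light/switch entities into
# "turn on/off <name>" phrases suitable as a Multinet command grammar.
# Entity dicts mimic the shape returned by HA's GET /api/states endpoint.

MULTINET_MAX_COMMANDS = 400  # Multinet only supports 400 defined commands


def generate_commands(entities):
    """Build one speech command phrase per (entity, verb) pair."""
    commands = []
    for entity in entities:
        domain = entity["entity_id"].split(".", 1)[0]
        if domain not in ("light", "switch"):
            continue  # only lights and switches, as in the comment above
        name = entity["attributes"]["friendly_name"].lower()
        for verb in ("turn on", "turn off"):
            commands.append(f"{verb} {name}")
    # The grammar must stay within Multinet's command limit.
    return commands[:MULTINET_MAX_COMMANDS]


if __name__ == "__main__":
    sample = [  # hardcoded stand-in for a live HA /api/states response
        {"entity_id": "light.kitchen",
         "attributes": {"friendly_name": "Kitchen Light"}},
        {"entity_id": "switch.fan",
         "attributes": {"friendly_name": "Ceiling Fan"}},
        {"entity_id": "sensor.temp",
         "attributes": {"friendly_name": "Temperature"}},  # skipped: not light/switch
    ]
    for i, phrase in enumerate(generate_commands(sample)):
        print(f"{i} {phrase}")
```

The 400-command cap is why fixed grammars like this work for "turn on kitchen light" but not for open-ended queries, which is what motivates option 2.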
Has anyone made a custom wake word for ESP-skainet?
I don't believe that's possible. Seems like a pretty delicate process in the readme...
What are some alternatives?
piper - A fast, local neural text to speech system
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
noise - Go implementation of the Noise Protocol Framework
esp-web-tools - Open source tools to allow working with ESP devices in the browser
esp-box - The ESP-BOX is a new generation AIoT development platform released by Espressif Systems.