| | matrix-fire | hyelicht |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 5 | 145 |
| Growth | - | - |
| Activity | 7.4 | 2.5 |
| Latest commit | about 2 months ago | about 2 months ago |
| Language | Rust | C++ |
| License | - | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ask HN: What have you built with ESPHome, ESP8266 or similar hardware
My goal to release source and docs a la https://github.com/eikehein/hyelicht got waylaid by the ultimate DIY project of having a baby in November, but I will try to get it done this year!
-
The Broadway Windowing System
Qt supports this too:
https://doc.qt.io/qt-5/webgl.html
I've used this to let friends on IRC paint on my LED shelf (https://github.com/eikehein/hyelicht), which has a Qt-based embedded GUI, over the internet. Cheap fun!
-
The IKEA-powered homelab on a wall
IKEA hacks, of course! https://github.com/eikehein/hyelicht/
-
Apple is reportedly spending ‘millions of dollars a day’ training AI
7. Add the sensor event and memory system described above
There are a few other tricks. To improve the audio capture, I take note of where spatially the hot word is detected (i.e. which mic in the array gets the best signal) and then capture the rest & perform the silence detection with a corresponding bias.
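The mic-bias idea above can be sketched roughly as follows. This is a hypothetical illustration, not code from the project: the function names, the energy-based silence check, and the `threshold`/`bias` parameters are all assumptions standing in for whatever the author's actual pipeline does.

```python
def pick_best_channel(hotword_scores):
    """Return the index of the mic channel with the strongest hot-word score."""
    return max(range(len(hotword_scores)), key=lambda i: hotword_scores[i])

def is_silence(frame_energies, best_channel, threshold=0.02, bias=2.0):
    """Energy-based silence check for one audio frame across the mic array,
    weighting the channel that heard the hot word best more heavily."""
    weights = [bias if i == best_channel else 1.0 for i in range(len(frame_energies))]
    weighted = sum(w * e for w, e in zip(weights, frame_energies)) / sum(weights)
    return weighted < threshold
```

The effect of the bias is that quiet noise on far mics is less likely to keep the capture open once the speaker's own mic has gone silent.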
This is actually done in a distributed fashion over the network, so if two of the AI speakers hear the same command, only one of them will end up processing it.
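One simple way to get the "only one speaker processes it" behavior is a deterministic election: every speaker that heard the command broadcasts its detection confidence, and each node independently computes the same winner. This is a minimal sketch of that pattern, not the author's actual protocol; the tuple shape and tie-breaking rule are assumptions.

```python
def elect_processor(claims):
    """claims: list of (node_id, confidence) pairs for the same detected command.
    Returns the node_id that should process it: highest confidence wins,
    ties break toward the lowest node_id, so every node agrees without
    further coordination."""
    return min(claims, key=lambda c: (-c[1], c[0]))[0]
```

Because the rule is a pure function of the shared claims, no leader or extra round-trip is needed; each speaker just checks whether the winner is itself.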
They end up making mainly HTTP calls to APIs that already exist around my house. I have a second RasPi in my LED shelf (another old project, https://github.com/eikehein/hyelicht/) that doubles as a Philips Hue bridge with a zigbee dongle. That's what the DIY AI speakers interact with when making changes to the lighting.
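Talking to a Hue-compatible bridge from such a setup is a plain HTTP PUT against the bridge's REST API. A minimal sketch, assuming the standard Hue `/api/<username>/lights/<id>/state` endpoint; the bridge IP, username, and light id here are placeholders, not values from the comment.

```python
import json
from urllib import request

def set_light(bridge_ip, username, light_id, on, brightness=None):
    """Build a Philips Hue 'set light state' request.
    bridge_ip and username stand in for your own bridge credentials."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    state = {"on": on}
    if brightness is not None:
        state["bri"] = brightness  # Hue brightness range is 1-254
    return request.Request(url, data=json.dumps(state).encode(),
                           method="PUT",
                           headers={"Content-Type": "application/json"})

# Usage on a real network: request.urlopen(set_light("192.168.1.2", "devuser", 3, True))
```

Returning the prepared `Request` instead of sending it keeps the function testable without a bridge on the network.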
I will say: Depending on the user command and the weather in the cloud, it's pretty slow. I've tried my best to optimize the client side for perceived user latency, but there's no way around the GPT-4 API just being pretty slow. And 3.5-turbo just doesn't cut it for what I'm trying to do.
I'd like to get all of this off the cloud entirely. I predict the next generation of my home NAS will have a GPU in it and will try to run things like a fine-tuned Llama 2 for the home.