| | npct | hyelicht |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 7 | 145 |
| Growth | - | - |
| Activity | - | 2.5 |
| Last commit | over 3 years ago | about 2 months ago |
| Language | HTML | C++ |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
npct
-
Ask HN: What have you built with ESPHome, ESP8266 or similar hardware
Back during the pandemic, hardware-based contact tracers were an idea. I built one using the ESP32; see https://github.com/tbensky/npct. In a nutshell, everyone generates a (non-centralized) hash for themselves based on local entropy. This hash is set as the BLE name of the ESP32. Turn it on and throw it in your backpack as you go out. When two ESP32s pass by each other, they both log the other's BLE name (hence hash). Later on, the hash logs can be inspected and uploaded to a central server so you can see who encountered whom. Seems like there are still some (non-Covid) applications for this (but I can't think of any). Fun project. Learned a lot about Bluetooth.
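The scheme above can be sketched in a few lines. This is an illustrative sketch, not npct's actual code: the ID length, hash choice, and log format are all assumptions; on the device the ID would be advertised as the BLE name and the logs would hold scanned peer names.

```python
import hashlib
import os

def make_ephemeral_id() -> str:
    """Derive a short ID from local entropy; BLE names have a tight length budget,
    so truncate the hex digest (16 chars here is an arbitrary choice)."""
    return hashlib.sha256(os.urandom(32)).hexdigest()[:16]

def mutual_encounter(log_a: set, log_b: set, id_a: str, id_b: str) -> bool:
    """After upload, two devices 'met' if each one logged the other's advertised ID."""
    return id_b in log_a and id_a in log_b
```

Because the IDs are derived from local entropy rather than handed out by a server, nobody can link an ID to a person until the owner chooses to upload their log.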
hyelicht
-
Ask HN: What have you built with ESPHome, ESP8266 or similar hardware
My goal of releasing the source and docs à la https://github.com/eikehein/hyelicht got waylaid by the ultimate DIY project of having a baby in November, but I will try to get it done this year!
-
The Broadway Windowing System
Qt supports this too:
https://doc.qt.io/qt-5/webgl.html
I've used this to let friends on IRC paint on my LED shelf (https://github.com/eikehein/hyelicht), which has a Qt-based embedded GUI, over the internet. Cheap fun!
-
The IKEA-powered homelab on a wall
IKEA hacks, of course! https://github.com/eikehein/hyelicht/
-
Apple is reportedly spending ‘millions of dollars a day’ training AI
7. Add the sensor event and memory system described above
There are a few other tricks. To improve the audio capture, I take note of where the hot word is detected spatially (i.e., which mic in the array gets the best signal) and then capture the rest and perform the silence detection with a corresponding bias.
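The mic-selection part of that trick might look something like the following. This is a hypothetical sketch (the function names and the RMS-energy criterion are my assumptions, not the author's code): pick the channel where the hot-word window carried the most energy, then keep using that channel for the rest of the utterance.

```python
import math

def rms(samples):
    """Root-mean-square energy of one channel's hot-word window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def best_mic(channels):
    """Index of the mic array channel with the strongest hot-word signal;
    the caller would bias capture and silence detection toward this channel."""
    return max(range(len(channels)), key=lambda i: rms(channels[i]))
```

Selecting a channel once per utterance is much cheaper than full beamforming, at the cost of not tracking a speaker who moves mid-sentence.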
This is actually done in a distributed fashion over the network, so if two of the AI speakers hear the same command, only one of them will end up processing it.
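One simple way to get that "only one speaker processes it" behavior, and an assumption on my part rather than the author's described protocol, is for each device to broadcast a detection score and defer to any peer with a strictly better claim, breaking ties deterministically by device ID:

```python
def should_process(my_id: str, my_score: float, peer_claims: dict) -> bool:
    """Return True only if no peer reported a better (score, id) claim.
    peer_claims maps peer device IDs to their hot-word detection scores."""
    for peer_id, score in peer_claims.items():
        if (score, peer_id) > (my_score, my_id):
            return False
    return True
```

Since every device evaluates the same comparison over the same set of claims, exactly one of them concludes it should process the command, with no central coordinator needed.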
They end up making mainly HTTP calls to APIs that already exist around my house. I have a second RasPi in my LED shelf (another old project, https://github.com/eikehein/hyelicht/) that doubles as a Philips Hue bridge with a zigbee dongle. That's what the DIY AI speakers interact with when making changes to the lighting.
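For the lighting case, those HTTP calls would look roughly like the classic Hue v1 REST API: a PUT to the bridge's `/lights/<id>/state` endpoint with a small JSON body. A minimal stdlib sketch, assuming a bridge address and API username that are placeholders, not values from the project:

```python
import json
from urllib import request

def set_light_state(bridge_ip, username, light_id, on, bri=None):
    """Build a Hue v1 'set light state' request; the caller sends it
    with urllib.request.urlopen(req)."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    body = {"on": on}
    if bri is not None:
        body["bri"] = bri  # Hue brightness is 1-254
    return request.Request(url, data=json.dumps(body).encode(), method="PUT")
```

Because the shelf's RasPi speaks this same API, anything that can talk to a Hue bridge (apps, the AI speakers, home-automation hubs) controls the shelf for free.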
I will say: Depending on the user command and the weather in the cloud, it's pretty slow. I've tried my best to optimize the client side for perceived user latency, but there's no way around the GPT-4 API just being pretty slow. And 3.5-turbo just doesn't cut it for what I'm trying to do.
I'd like to get all of this off the cloud entirely. I predict the next generation of my home NAS will have a GPU in it, and I'll try to run things like a fine-tuned Llama 2 for the home.