TinyML: Ultra-low power Machine Learning

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • voice-controlled-robot

    A voice-controlled robot using the ESP32 and TensorFlow Lite

• This is my voice-controlled robot: https://github.com/atomic14/voice-controlled-robot

    It does left, right, forward and backward. That was pretty much all I could fit in the model.

    And here’s wake word detection: https://github.com/atomic14/diy-alexa

    It does local wake word detection on device.
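A four-command model like the one above typically ends in a small output layer whose highest-scoring class is mapped to a drive command. A minimal Rust sketch of that last step, with command names taken from the comment (the repo's actual types and code will differ):

```rust
// Map the model's four output scores (left, right, forward, backward)
// to a drive command by taking the argmax. Illustrative only; the
// repo's real code runs on an ESP32 with TensorFlow Lite.
#[derive(Debug, PartialEq)]
enum Command {
    Left,
    Right,
    Forward,
    Backward,
}

fn decode(scores: [f32; 4]) -> Command {
    // Index of the highest score decides the command.
    let (idx, _) = scores
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    match idx {
        0 => Command::Left,
        1 => Command::Right,
        2 => Command::Forward,
        _ => Command::Backward,
    }
}

fn main() {
    assert_eq!(decode([0.1, 0.2, 0.6, 0.1]), Command::Forward);
}
```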

  • diy-alexa

    DIY Alexa

  • microflow-rs

    A Rust TinyML compiler for neural network inference on embedded systems

  • I built a Rust TinyML compiler for my master thesis project: https://github.com/matteocarnelos/microflow-rs

    It uses Rust procedural macros to evaluate the model at compile time and create a predict() function that performs inference on the given model. By doing so, I was able to strip down the binary way more than TensorFlow Lite for Microcontrollers and other engines. I even managed to run a speech command recognizer (TinyConv) on an 8-bit ATmega328 (Arduino Uno).
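The compile-time idea can be sketched in plain Rust: conceptually, the macro expands the model into a predict() function with the weights baked in as constants, so the binary carries no graph interpreter. The shapes, names, and quantization below are illustrative, not microflow-rs's real output:

```rust
// Illustrative sketch of compile-time inference: the "model" is a single
// quantized dense layer whose weights and biases are consts, so the
// compiler can inline and dead-code-eliminate everything else.
// Not the actual code generated by microflow-rs.
const W: [[i8; 3]; 2] = [[1, -2, 3], [-1, 2, -3]];
const B: [i32; 2] = [10, -10];

/// Quantized dense layer: y = W * x + b, with i8 inputs widened to i32.
fn predict(x: [i8; 3]) -> [i32; 2] {
    let mut y = B;
    for (row, out) in W.iter().zip(y.iter_mut()) {
        for (w, xi) in row.iter().zip(x.iter()) {
            *out += (*w as i32) * (*xi as i32);
        }
    }
    y
}

fn main() {
    assert_eq!(predict([1, 1, 1]), [12, -12]);
}
```

Because the weights are compile-time constants rather than data loaded by a runtime, the linker can strip everything unused, which is the mechanism behind the small binaries the comment describes.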

  • esp-nn

    Optimised Neural Network functions for Espressif chipsets

There is a range of ML acceleration available on existing chips. Basic 4-wide 8-bit integer SIMD (ARM's DSP extension; similar in spirit to NEON, which itself is Cortex-A only) is available on basically all ARM Cortex-M4F chips, which have been shipping for 8+ years. It gives a 4-5x speedup for neural networks.

    The more recent ESP32-S3 has operations with up to 10x speedup, see https://github.com/espressif/esp-nn

Then there are RISC-V chips with neural-network co-processors, such as the Kendryte K210.

ARM has also defined a new set of vector extensions for NN acceleration (Helium/MVE), with the Cortex-M85 among the first reference core designs. Chips are becoming available this year.
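The 4-wide 8-bit SIMD described above packs four 8-bit lanes into one 32-bit register and adds them in a single instruction (SADD8 in ARM's DSP extension), which is where the ~4x speedup comes from. A portable Rust emulation of the lane-wise wrapping-add semantics:

```rust
// Portable emulation of a 4-lane packed 8-bit add (ARM SADD8-style):
// each byte lane of the two 32-bit words is added independently with
// wraparound, in what the hardware does as one instruction.
fn sadd8(a: u32, b: u32) -> u32 {
    let mut out = 0u32;
    for lane in 0..4 {
        let shift = lane * 8;
        let x = ((a >> shift) & 0xFF) as u8;
        let y = ((b >> shift) & 0xFF) as u8;
        // Wrapping add on the raw bytes matches signed 8-bit wraparound.
        out |= (x.wrapping_add(y) as u32) << shift;
    }
    out
}

fn main() {
    // Four independent lane additions in one "operation".
    assert_eq!(sadd8(0x01020304, 0x01010101), 0x02030405);
    // Per-lane wraparound: 0xFF + 0x01 wraps to 0x00 in lane 0 only.
    assert_eq!(sadd8(0x000000FF, 0x00000001), 0x00000000);
}
```

On real Cortex-M4 hardware this whole loop collapses to one SADD8 instruction, processing four int8 multiply-accumulate inputs per cycle instead of one.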

  • Arm-Helium-Technology

    A reference book on M-Profile Vector Extensions (MVE) for Arm Cortex-M Processors

NOTE: The number of mentions on this list reflects mentions on common posts plus user-suggested alternatives, so a higher number indicates a more popular project.


Related posts

  • Show HN: I made a spaced repetition tool to master coding problems

    3 projects | news.ycombinator.com | 26 Apr 2024
  • Burn: Deep Learning Framework built using Rust

    1 project | news.ycombinator.com | 24 Apr 2024
  • FLaNK AI-April 22, 2024

    28 projects | dev.to | 22 Apr 2024
  • Transitioning From PyTorch to Burn

    5 projects | dev.to | 14 Feb 2024
  • Burn Deep Learning Framework Release 0.12.0 Improved API and PyTorch Integration

    1 project | news.ycombinator.com | 31 Jan 2024