# cONNXr vs ML-examples

| | cONNXr | ML-examples |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 175 | 405 |
| Growth | - | 2.0% |
| Activity | 0.0 | 5.0 |
| Latest commit | 6 months ago | 9 months ago |
| Language | C | C++ |
| License | MIT License | Apache License 2.0 |
- Stars: the number of stars that a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
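The idea that "recent commits have higher weight" can be illustrated with a small sketch. The site's actual formula is not published, so the half-life decay and the `activity_score` helper below are purely a hypothetical illustration, not the real metric:

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    # Hypothetical decay: a commit's weight halves every `half_life_days`.
    # This only illustrates the idea that recent commits contribute more
    # than older ones; the real site formula is not published.
    return sum(0.5 ** ((today - d).days / half_life_days) for d in commit_dates)

today = date(2024, 1, 1)
recent = [today - timedelta(days=n) for n in (1, 7, 14)]      # active project
stale = [today - timedelta(days=n) for n in (300, 330, 360)]  # dormant project

print(activity_score(recent, today) > activity_score(stale, today))  # True
```

With this kind of weighting, three commits made last week outscore three commits made last year, even though the raw commit counts are equal.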
Posts mentioning cONNXr:

- [D] Run Pytorch model inference on Microcontroller: "cONNXr - framework with C99 inference engine. Also interesting and not very active."
- [D] Machine Learning Expertise Combined with Embedded Knowledge
Posts mentioning ML-examples:

- [D] Run Pytorch model inference on Microcontroller: "CMSIS-NN. ARM-centric. Examples. They also have an example of a PyTorch-to-TFLite converter via ONNX."
- Machine Learning on ARM: "Well, there's something: https://github.com/ARM-software/ML-examples"
What are some alternatives?
- nanopb-example - This is a simple project created to test the capabilities of Google's protobuf C implementation, nanopb.
- MNN - MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba.
- CMSIS-NN - CMSIS-NN Library
- oneflow - OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
- ai8x-synthesis - Quantization and Synthesis (Device Specific Code Generation) for ADI's MAX78000 and MAX78002 Edge AI Devices
- tensorflow - An Open Source Machine Learning Framework for Everyone
- deepC - vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers
- onnx2c - Open Neural Network Exchange to C compiler.
- MaximAI_Documentation - START HERE: Documentation for ADI's MAX78000 and MAX78002 Edge AI devices
- CNTK - Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
- nnom - A higher-level Neural Network library for microcontrollers.
- tinyengine - [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory