larq vs nngen

| | larq | nngen |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 692 | 319 |
| Growth | 0.3% | 1.9% |
| Activity | 7.5 | 4.9 |
| Last commit | 17 days ago | 7 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
larq

- Running CNN on ATmega328P

  You quantize the model parameters, i.e., instead of shipping a model that uses floating-point math, you convert it to fixed point. This has two advantages: 1) a pure size reduction, and 2) most low-power MCUs don't have floating-point multipliers but do have single-cycle fixed-point multipliers. This is a classic DSP trick that has been used for a long time. The real research aspects come in as you start dropping below 8 bits, even down to a single bit in some cases (see Larq).
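The fixed-point conversion described above can be sketched in plain Python. This is an illustrative example of the classic DSP trick, not Larq's API: the Q0.7 format choice (an int8 with 7 fractional bits) and the helper names are assumptions for demonstration.

```python
def quantize_q7(weights, frac_bits=7):
    """Quantize float weights to signed 8-bit fixed point (Q0.7 format).

    Each weight w is stored as round(w * 2**frac_bits), clamped to the
    int8 range, so on the MCU every multiply becomes a single-cycle
    integer multiply instead of a software float operation.
    """
    scale = 1 << frac_bits
    return [max(-128, min(127, round(w * scale))) for w in weights]

def dequantize_q7(codes, frac_bits=7):
    """Recover approximate float values from the fixed-point codes."""
    scale = 1 << frac_bits
    return [q / scale for q in codes]

weights = [0.5, -0.25, 0.99, -1.0]
codes = quantize_q7(weights)        # [64, -32, 127, -128], one byte each
approx = dequantize_q7(codes)       # [0.5, -0.25, 0.9921875, -1.0]
```

The size reduction falls out directly: each weight now occupies 1 byte instead of 4, and the quantization error (here at most half a step, 1/256) is what sub-8-bit research pushes further down toward single-bit representations.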
- Simplifying AI to FPGA deployment, looking for opportunities

  It is a difficult question. I work almost exclusively with open source, so I'm not well placed to give you advice. Maybe you can look at how Plumerai handles things: they keep some of their stack proprietary, but they've also open-sourced their BNN work as Larq: https://github.com/larq/larq
nngen

- Simplifying AI to FPGA deployment, looking for opportunities

  Yes, like u/ComeGateMeBro, I also thought of hls4ml, and also something else I just found from Japan: NNgen, https://github.com/NNgen/nngen
What are some alternatives?

- finn-examples - Dataflow QNN inference accelerator examples on FPGAs
- TensorLayer - Deep learning and reinforcement learning library for scientists and engineers
- model-optimization - A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning
- Pyverilog - Python-based hardware design processing toolkit for Verilog HDL
- data-science-ipython-notebooks - Data science Python notebooks: deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines
- PipelineC - A C-like hardware description language (HDL) adding high-level synthesis (HLS)-like automatic pipelining as a language construct/compiler feature
- nni - An open-source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning
- dace - DaCe - Data-centric parallel programming