| | finn | nngen |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 665 | 318 |
| Growth | 3.2% | 1.6% |
| Activity | 9.7 | 4.9 |
| Last commit | 9 days ago | 7 months ago |
| Language | Python | Python |
| License | BSD 3-Clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
finn
-
Hi, what would be the best HLS tool for implementing neural networks on an FPGA?
FINN - https://github.com/Xilinx/finn
-
Can anyone tell if Xilinx's FINN (from Xilinx's research lab) is restricted for use only to xilinx based FPGAs?
It seems fine to use on other FPGAs; there are some clauses you need to abide by. https://github.com/Xilinx/finn/blob/main/LICENSE.txt
-
Sub ms - 3ms Latency Vision task on FPGA
It really depends on the type of data you are using, and there may (or may not) be some trade-offs and sacrifices. There are frameworks that translate a neural network description from high-level Python code into equivalent HLS code optimized for low-latency inference on FPGAs. Two frameworks worth exploring are hls4ml and FINN; both can achieve low-latency neural network inference on FPGAs using Xilinx Vitis HLS. These are what I found when I ran a similar experiment about a year ago, with a much lower latency target (a few hundred ns) and a very simple MLP taking a 1D signal as input, so I'm not sure whether better alternatives exist as of 2023. Conceptually, they all work on the same principle: the framework first quantizes the network and limits the data precision to fixed point, then applies dataflow techniques when generating the HLS code, so that the resulting RTL achieves the best overall latency.
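The fixed-point quantization step described above can be sketched in plain Python. This is a toy illustration of the principle, not FINN's or hls4ml's actual API; the 8-bit width and the 2/6 integer/fraction split are arbitrary example choices:

```python
def quantize_fixed(x, total_bits=8, frac_bits=6):
    """Quantize a float to signed fixed-point with `total_bits` bits,
    `frac_bits` of which are fractional (example split, not a FINN default).

    The value is scaled by 2**frac_bits, rounded to the nearest integer,
    and saturated to the representable signed range -- the same kind of
    constraint an HLS fixed-point type (e.g. ap_fixed<8,2>) imposes.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))      # most negative representable integer
    hi = (1 << (total_bits - 1)) - 1   # most positive representable integer
    q = max(lo, min(hi, round(x * scale)))
    return q / scale                   # back to real-valued representation

# A weight like 0.3333 lands on the nearest multiple of 2**-6,
# while out-of-range values saturate instead of wrapping around.
print(quantize_fixed(0.3333))   # nearest representable value, 21/64
print(quantize_fixed(10.0))     # saturates at the positive limit, 127/64
```

Applying this to every weight and activation is what lets the generated RTL replace floating-point units with narrow integer arithmetic, which is where most of the latency and resource savings come from.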
nngen
-
Simplifying AI to FPGA deployment, looking for opportunities
Yes, like u/ComeGateMeBro, I also thought of hls4ml, and also something else I just found from Japan: NNgen, https://github.com/NNgen/nngen
What are some alternatives?
hls4ml - Machine learning on FPGAs using HLS
TensorLayer - Deep Learning and Reinforcement Learning Library for Scientists and Engineers
intel-extension-for-pytorch - A Python package extending the official PyTorch to easily obtain performance gains on Intel platforms
Pyverilog - Python-based Hardware Design Processing Toolkit for Verilog HDL
qkeras - QKeras: a quantization deep learning library for Tensorflow Keras
finn-examples - Dataflow QNN inference accelerator examples on FPGAs
PipelineC - A C-like hardware description language (HDL) adding high-level synthesis (HLS)-like automatic pipelining as a language construct/compiler feature.
nni - An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
larq - An Open-Source Library for Training Binarized Neural Networks
dace - DaCe - Data Centric Parallel Programming