finn

Dataflow compiler for quantized neural network (QNN) inference on FPGAs (by Xilinx)

Finn Alternatives

Similar projects and alternatives to finn based on common topics and language

  • hls4ml

    Machine learning on FPGAs using HLS

  • intel-extension-for-pytorch

    A Python package that extends the official PyTorch to easily obtain better performance on Intel platforms

  • qkeras

    3 mentions

    QKeras: a quantization deep learning library for Tensorflow Keras

  • deepsocflow

    An Open Workflow to Build Custom SoCs and run Deep Models at the Edge

  • nngen

    1 mention

    NNgen: A Fully-Customizable Hardware Synthesis Compiler for Deep Neural Network

NOTE: The number of mentions counts how often a project appears in common posts plus user-suggested alternatives, so a higher count suggests a more similar or more frequently recommended finn alternative.

finn reviews and mentions

Posts with mentions or reviews of finn. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-06-13.
  • Hi, What could be the best HLS tool for implementing neural networks on FPGA
    2 projects | /r/FPGA | 13 Jun 2023
    FINN - https://github.com/Xilinx/finn
  • Can anyone tell if Xilinx's FINN (from Xilinx's research lab) is restricted for use only to xilinx based FPGAs?
    2 projects | /r/FPGA | 8 Apr 2023
    Seems fine to use on other FPGAs; there are some clauses you need to abide by. https://github.com/Xilinx/finn/blob/main/LICENSE.txt
  • Sub ms - 3ms Latency Vision task on FPGA
    2 projects | /r/FPGA | 5 Feb 2023
    It really depends on the type of data you are using, and there may (or may not) be some trade-offs and sacrifices. There are frameworks that translate a neural network described in high-level Python code into equivalent HLS code optimized for low-latency inference on FPGAs. Two worth exploring are hls4ml and finn; both can achieve low-latency neural network inference on FPGAs using Xilinx Vitis HLS. These are what I found when I did a similar experiment a year ago, with a much lower latency target (a few hundred ns) and a very simple MLP with a 1D signal as input. Not sure if there are better alternatives available as of 2023. Conceptually, all of these work on the same primary principle: a supporting framework/methodology first quantizes the network and limits the precision of data to fixed point. The generated HLS code then applies dataflow techniques so that the resulting RTL has the best overall latency.
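    The quantization step described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the actual API of hls4ml or FINN (those frameworks perform quantization-aware training and HLS code generation; here we only show the core idea of snapping values to a signed fixed-point grid with saturation):

    ```python
    # Toy fixed-point quantizer; function name and parameters are
    # illustrative, not from any framework's real API.

    def quantize_fixed_point(x, total_bits=8, frac_bits=4):
        """Round x to a signed fixed-point grid with `frac_bits`
        fractional bits, saturating at the representable range."""
        scale = 1 << frac_bits                     # step size is 1/scale, e.g. 1/16
        lo = -(1 << (total_bits - 1))              # smallest integer code (-128 for 8 bits)
        hi = (1 << (total_bits - 1)) - 1           # largest integer code (127 for 8 bits)
        code = max(lo, min(hi, round(x * scale)))  # quantize, then saturate
        return code / scale

    # A weight like 0.30 snaps to the nearest 1/16 step:
    print(quantize_fixed_point(0.30))    # 0.3125
    # Values outside the range saturate instead of wrapping:
    print(quantize_fixed_point(100.0))   # 7.9375
    ```

    Limiting every weight and activation to such a narrow fixed-point format is what lets the generated RTL replace floating-point units with small integer arithmetic, which is where most of the latency and resource savings come from.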

Stats

Basic finn repo stats
Mentions: 4
Stars: 661
Activity: 0.0
Last commit: 6 days ago

Xilinx/finn is an open source project licensed under the BSD 3-clause "New" or "Revised" License, an OSI-approved license.

The primary programming language of finn is Python.

