larq

An Open-Source Library for Training Binarized Neural Networks (by larq)

Larq Alternatives

Similar projects and alternatives to larq based on common topics and language

  • finn-examples

    1 larq VS finn-examples

    Dataflow QNN inference accelerator examples on FPGAs

  • model-optimization

    A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.

  • nngen

    1 larq VS nngen

    NNgen: A Fully-Customizable Hardware Synthesis Compiler for Deep Neural Network

  • data-science-ipython-notebooks

    Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

  • qkeras

    3 larq VS qkeras

    QKeras: a quantization deep learning library for Tensorflow Keras

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number therefore indicates a better larq alternative or higher similarity.

larq reviews and mentions

Posts with mentions or reviews of larq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-09.
  • Running CNN on ATmega328P
    1 project | /r/embedded | 1 Apr 2022
    You quantize the model parameters, i.e., don't ship the model using floating-point math; convert it to fixed point instead. This has two advantages: 1) a pure size reduction, and 2) most low-power MCUs don't have floating-point multipliers but do have single-cycle fixed-point multipliers. This is a classic DSP trick that has been used for a long time. The real research aspects come in as you start dropping below 8 bits, even down to a single bit in some cases (see Larq; a minimal binarized-model sketch follows this list).
  • Simplifying AI to FPGA deployment, looking for opportunities
    3 projects | /r/FPGA | 9 Mar 2022
    It is a difficult question. I work almost exclusively with open source, so I'm not of much use in advising you here. Maybe you can look at how Plumerai handles things; they keep some of their stack proprietary, but they've also open-sourced their BNN library Larq: https://github.com/larq/larq
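The ATmega328P post above ends at single-bit quantization, which is what larq is built for. As a rough illustration only, the sketch below builds a small binarized CNN using larq's documented Keras-style layers (QuantConv2D, QuantDense) and the "ste_sign" quantizer; the specific architecture, layer sizes, and 28x28x1 input shape are illustrative assumptions, not taken from either post.

    import tensorflow as tf
    import larq as lq

    # 1-bit weights and activations via the straight-through sign estimator.
    kwargs = dict(
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    )

    model = tf.keras.Sequential([
        # First layer keeps full-precision inputs, a common BNN convention.
        lq.layers.QuantConv2D(32, (3, 3),
                              kernel_quantizer="ste_sign",
                              kernel_constraint="weight_clip",
                              use_bias=False,
                              input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.BatchNormalization(scale=False),

        lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **kwargs),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.BatchNormalization(scale=False),

        tf.keras.layers.Flatten(),
        lq.layers.QuantDense(10, use_bias=False, **kwargs),
        tf.keras.layers.Activation("softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Reports parameter memory, showing the size reduction from 1-bit weights.
    lq.models.summary(model)

The lq.models.summary call prints the memory footprint of the binarized parameters, which is where the size reduction mentioned in the post comes from.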

Stats

Basic larq repo stats
Mentions: 2
Stars: 693
Activity: 7.5
Last Commit: 4 days ago

larq/larq is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of larq is Python.

