qkeras VS model-optimization

Compare qkeras vs model-optimization and see how they differ.

                    qkeras              model-optimization
Mentions            3                   1
Stars               522                 1,470
Growth              1.1%                0.8%
Activity            6.6                 6.8
Last commit         about 2 months ago  10 days ago
Language            Python              Python
License             Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

qkeras

Posts with mentions or reviews of qkeras. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-06.
  • How to build FPGA-based ML accelerator?
    3 projects | /r/FPGA | 6 Jul 2022
    I would check out hls4ml. It's an open-source project made by and for people at CERN to convert neural networks created in Python with QKeras (a quantization extension of Keras) into HLS, with Vivado HLS being the best-supported backend. There are some caveats, though: a fellow student and I have had trouble getting the generated HLS to match the Keras model and to be feasible to synthesize, but it seems to work well for smaller neural networks.
  • FPGA Neural Network
    2 projects | /r/FPGA | 3 Apr 2021
    For quantization-aware training, there's also a tool we integrate with called qkeras: https://github.com/google/qkeras/tree/master/qkeras
  • [D] How to Quantize a CNN; And how to deal with a professor...
    1 project | /r/MachineLearning | 31 Jan 2021
    Brevitas appears to be what you're looking for. I haven't used it myself, but I developed something similar for a previous project. You could also take a look at https://github.com/google/qkeras
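
A minimal sketch of the QKeras quantization-aware training flow and the hls4ml conversion discussed in the posts above; the layer sizes, bit widths, FPGA part number, and output directory are illustrative assumptions rather than values taken from the posts:

    # Build a small quantization-aware model with QKeras.
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model
    from qkeras import QDense, QActivation, quantized_bits, quantized_relu

    inputs = Input(shape=(16,))
    # 6-bit fixed-point weights and biases (0 integer bits), learned with quantization-aware training
    x = QDense(32,
               kernel_quantizer=quantized_bits(6, 0, alpha=1),
               bias_quantizer=quantized_bits(6, 0, alpha=1),
               name="fc1")(inputs)
    x = QActivation(quantized_relu(6), name="relu1")(x)
    outputs = QDense(5,
                     kernel_quantizer=quantized_bits(6, 0, alpha=1),
                     bias_quantizer=quantized_bits(6, 0, alpha=1),
                     name="fc_out")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    # ... train with model.fit(...) as usual ...

    # Hand the trained, quantized model to hls4ml to generate HLS code.
    # The part number and output directory are placeholders.
    import hls4ml

    config = hls4ml.utils.config_from_keras_model(model, granularity="name")
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir="hls4ml_prj",
        part="xcu250-figd2104-2L-e",
    )
    hls_model.compile()  # builds a C++ emulation library for checking against the Keras model

Comparing hls_model.predict with model.predict on the same inputs is the usual way to catch the HLS/Keras mismatches mentioned in the first post before attempting synthesis.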

model-optimization

Posts with mentions or reviews of model-optimization. We have used some of these posts to build our list of alternatives and similar projects.
  • Need Help With Pruning Model Weights in Tensorflow 2
    1 project | /r/tensorflow | 7 Jun 2021
    I have been following the example shown here, and so far I've had mixed results. I wanted to ask for some help because the resources I've found online have not been able to answer some of my questions (perhaps because some of them are obvious and I am just being dumb).
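
For context, the pruning example the post refers to follows the usual TensorFlow Model Optimization pattern: wrap a Keras model with prune_low_magnitude, train with the UpdatePruningStep callback so the sparsity schedule advances, then strip the pruning wrappers before export. The model architecture, sparsity target, and step counts below are illustrative assumptions, not details from the post:

    # Illustrative magnitude-pruning sketch with tensorflow_model_optimization.
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    base_model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

    # Ramp sparsity from 0% to 50% of the weights over the first 1,000 training steps.
    pruning_params = {
        "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
            initial_sparsity=0.0,
            final_sparsity=0.5,
            begin_step=0,
            end_step=1000,
        )
    }
    pruned_model = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)

    pruned_model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # UpdatePruningStep is required; without it the pruning masks never update.
    callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
    # pruned_model.fit(x_train, y_train, epochs=2, callbacks=callbacks)

    # Remove the pruning wrappers so the zeroed weights can be exported or compressed.
    final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)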

What are some alternatives?

When comparing qkeras and model-optimization, you can also consider the following projects:

hls4ml - Machine learning on FPGAs using HLS

deepsparse - Sparsity-aware deep learning inference runtime for CPUs

aimet - AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

conifer - Fast inference of boosted decision trees and forests on FPGAs

3d-model-convert-to-gltf - Convert 3D models (STL/IGES/STEP/OBJ/FBX) to glTF, with compression

horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.

d2l-en - Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.

larq - An Open-Source Library for Training Binarized Neural Networks

Keras - Deep Learning for humans

only_train_once - OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM