neural-compressor VS tflite-micro

Compare neural-compressor vs tflite-micro and see how they differ.

neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (by Intel)
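
Here is a minimal sketch of what post-training INT8 quantization looks like with Intel Neural Compressor, assuming the 2.x `quantization.fit` API (names vary between releases); the toy model and calibration loader below are illustrative placeholders, not code from the project's docs.

```python
# A minimal sketch of INT8 post-training quantization with Intel Neural
# Compressor, assuming the 2.x API; the toy model and calibration data
# are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# Toy FP32 model and a small calibration set (real code would use your
# trained model and representative input data).
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
calib_set = TensorDataset(torch.randn(64, 16), torch.zeros(64, dtype=torch.long))
calib_loader = DataLoader(calib_set, batch_size=8)

# Static post-training quantization: calibrate activation ranges on the
# calibration data, then convert weights and activations to INT8.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(model=model, conf=conf, calib_dataloader=calib_loader)
q_model.save("./quantized_model")
```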

tflite-micro

Infrastructure to enable deployment of ML models to low-power, resource-constrained embedded targets (including microcontrollers and digital signal processors). (by TensorFlow)
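
By contrast, tflite-micro runs models on-device in C++; the model is first converted to the TensorFlow Lite flatbuffer format on a host machine. Below is a minimal sketch of that host-side conversion step, using a toy Keras model as an illustrative placeholder.

```python
# A minimal sketch of the host-side conversion step that typically precedes
# tflite-micro deployment; the toy Keras model is an illustrative placeholder.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default size/latency optimizations
tflite_model = converter.convert()

# Write the flatbuffer; on device it is typically embedded as a C array
# (e.g. via `xxd -i model.tflite`) and executed with the C++ MicroInterpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```
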
                 neural-compressor    tflite-micro
Mentions         3                    2
Stars            1,964                1,661
Growth           4.0%                 4.2%
Activity         9.8                  9.4
Latest commit    5 days ago           5 days ago
Language         Python               C++
License          Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

neural-compressor

Posts with mentions or reviews of neural-compressor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-26.

tflite-micro

Posts with mentions or reviews of tflite-micro. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-26.

What are some alternatives?

When comparing neural-compressor and tflite-micro you can also consider the following projects:

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

onnxruntime - ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

mmrazor - OpenMMLab Model Compression Toolbox and Benchmark.

tvm - Open deep learning compiler stack for CPU, GPU, and specialized accelerators

nebuly - The user analytics platform for LLMs

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Lion - Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"

deepsparse - Sparsity-aware deep learning inference runtime for CPUs