neural-compressor VS tvm

Compare neural-compressor vs tvm and see how they differ.

neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (by Intel)
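
To give a feel for the project's scope, below is a minimal sketch of post-training INT8 quantization on the PyTorch path, assuming Neural Compressor's 2.x `fit`/`PostTrainingQuantConfig` API; the model and calibration loader are placeholders, not part of the library:

```python
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Placeholder FP32 model: any torch.nn.Module works here.
fp32_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 30 * 30, 10),
).eval()

# Placeholder calibration data: a DataLoader yielding (input, label) batches.
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(3, 32, 32), 0) for _ in range(16)], batch_size=4
)

# Post-training static quantization with the default accuracy-aware tuning.
q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")
```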

tvm

Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators (by Apache)
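
For comparison, here is a minimal sketch of TVM's classic Relay compile-and-run flow, assuming the ONNX frontend and a generic CPU target; the model file and input name are illustrative placeholders:

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Placeholder: an ONNX model exported elsewhere, with a single input "input".
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Import into Relay and compile for a generic CPU target.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module on the local CPU.
dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```
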
              neural-compressor    tvm
Mentions      3                    16
Stars         1,964                11,186
Growth        4.0%                 1.3%
Activity      9.8                  9.9
Last commit   6 days ago           7 days ago
Language      Python               Python
License       Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

neural-compressor

Posts with mentions or reviews of neural-compressor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-26.

tvm

Posts with mentions or reviews of tvm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-03.

What are some alternatives?

When comparing neural-compressor and tvm, you can also consider the following projects:

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

tflite-micro - Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

mmrazor - OpenMMLab Model Compression Toolbox and Benchmark.

onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

nebuly - The user analytics platform for LLMs

stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]

Lion - Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
