Top 15 Python pruning Projects
- Torch-Pruning: [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc.
- neural-compressor: SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- aimet: A library that provides advanced quantization and compression techniques for trained neural network models
- model-optimization: A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning
- only_train_once_personal_footprint: OTOv1-v3 (NeurIPS, ICLR, TMLR); DNN training, compression, structured pruning, erasing operators; CNN, diffusion, LLM
- UPop: [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
- OWL: Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" (by luuyin)
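Most of the libraries above build on the same core idea: rank weights by magnitude and zero out the lowest-ranked fraction. A minimal pure-Python sketch of unstructured magnitude pruning (the function name and threshold rule are illustrative, not any particular library's API):

```python
def magnitude_prune(weights, amount):
    """Zero out roughly `amount` (0..1) of the entries in a 2-D weight
    matrix, choosing the entries with the smallest absolute value.

    Illustrative sketch only: real libraries apply a persistent mask so
    pruned weights stay zero during fine-tuning.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * amount)          # number of weights to prune
    if k == 0:
        return [row[:] for row in weights]
    threshold = flat[k - 1]              # largest magnitude still pruned
    # Ties at the threshold may prune slightly more than `amount`.
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

weights = [[0.1, -0.5], [2.0, 0.05]]
pruned = magnitude_prune(weights, 0.5)   # zeroes the two smallest: 0.05 and 0.1
```

Structured pruning, which projects like Torch-Pruning and UPop focus on, instead removes whole channels, heads, or layers, which is what actually yields speedups on dense hardware.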
Python pruning related posts
- [P] Help: I want to compress EfficientNetV2 using pruning.
- [R] Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant!
- Intel Textual Inversion Training on Hugging Face
- [R] New sparsity research (oBERT) enabled a 175x increase in CPU performance for an MLPerf submission
- [R] BERT-Large: Prune Once for DistilBERT Inference Performance
- [R] How well do sparse ImageNet models transfer? Prune once and deploy anywhere for inference performance speedups! (arXiv link in comments)
- Still worrying about model compression? MMRazor may work for you.
Index
What are some of the best open-source pruning projects in Python? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | Torch-Pruning | 3,038 |
| 2 | neural-compressor | 2,430 |
| 3 | aimet | 2,341 |
| 4 | mmrazor | 1,600 |
| 5 | model-optimization | 1,536 |
| 6 | nncf | 1,044 |
| 7 | Sparsebit | 331 |
| 8 | sparsify | 326 |
| 9 | only_train_once_personal_footprint | 304 |
| 10 | wyng-backup | 257 |
| 11 | UPop | 102 |
| 12 | delve | 81 |
| 13 | OWL | 67 |
| 14 | thesis | 17 |
| 15 | Pi-SqueezeDet | 2 |