Top 17 Python pruning Projects
- Torch-Pruning: [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs (sketch after this list)
- sparseml: Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models (sketch after this list)
- neural-compressor: SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (sketch after this list)
- aimet: A library providing advanced quantization and compression techniques for trained neural network models
- model-optimization: A toolkit for optimizing Keras and TensorFlow ML models for deployment, including quantization and pruning (sketch after this list)
- only_train_once: OTOv1-v3 (NeurIPS, ICLR, TMLR): DNN training and compression via structured pruning and erasing operators, covering CNNs, diffusion models, and LLMs
- UPop: [ICML 2023] Unified and Progressive Pruning for Compressing Vision-Language Transformers
- OWL: Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" (by luuyin)
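To make a few of the entries above concrete, some hedged sketches follow. First, Torch-Pruning's structural-pruning workflow, adapted from the project's README; argument names such as `ch_sparsity` have shifted between releases, so treat this as illustrative rather than definitive:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18(weights=None)
example_inputs = torch.randn(1, 3, 224, 224)

# Importance criterion: L2 magnitude over coupled channel groups
imp = tp.importance.MagnitudeImportance(p=2)

# Build the pruner; keep the classifier head intact
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    ch_sparsity=0.5,           # target ~50% channel sparsity per layer
    ignored_layers=[model.fc],
)

# Physically remove channels; the pruned model stays dense
pruner.step()
print(model(example_inputs).shape)  # still (1, 1000), with fewer internal channels
```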
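Next, the "few lines of code" claim for sparseml, as a rough sketch of its recipe-driven training loop; the `recipe.yaml` path and the toy training data are placeholders, and recipe syntax is documented in the SparseML repo:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
train_loader = DataLoader(data, batch_size=8)

# A recipe declares, e.g., gradual magnitude pruning with start/end
# epochs and target sparsity; "recipe.yaml" is a placeholder path.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(3):  # ordinary training loop; modifiers run via hooks
    for x, y in train_loader:
        optimizer.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        optimizer.step()

manager.finalize(model)  # bake masks into the weights and remove hooks
```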
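For neural-compressor, a rough sketch of the 2.x post-training quantization entry point; the toy model and calibration data are stand-ins, dataloader format expectations vary by framework, and the newer 3.x API differs:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

fp32_model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))

# Calibration data: in practice, (input, label) pairs from the real dataset
calib = DataLoader(TensorDataset(torch.randn(64, 16),
                                 torch.randint(0, 2, (64,))), batch_size=8)

# Default post-training static INT8 quantization
q_model = fit(model=fp32_model,
              conf=PostTrainingQuantConfig(),
              calib_dataloader=calib)
q_model.save("./int8-model")
```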
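And for model-optimization, a minimal Keras magnitude-pruning sketch; the schedule numbers and toy data are illustrative:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 80% over the first 1,000 training steps
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)

pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=schedule)
pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# The UpdatePruningStep callback is required to advance the schedule
x = tf.random.normal((256, 784))
y = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
pruned.fit(x, y, epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip pruning wrappers before export
final = tfmot.sparsity.keras.strip_pruning(pruned)
```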
Project mention: Fast Llama 2 on CPUs with Sparse Fine-Tuning and DeepSparse | news.ycombinator.com | 2023-11-23
Interesting company. Yannic Kilcher interviewed Nir Shavit last year and they went into some depth: https://www.youtube.com/watch?v=0PAiQ1jTN5k DeepSparse is on GitHub: https://github.com/neuralmagic/deepsparse
Project mention: [P] Help: I want to compress EfficientnetV2 using pruning. | /r/MachineLearning | 2023-06-28
I also tried structured pruning from https://github.com/VainF/Torch-Pruning, as they report EfficientNetV2 to be "prunable", but got much worse results. However, the advantage of this approach is that it keeps the model dense, so you get a real speed-up on common GPUs, whereas unstructured pruning sparsifies the model and you need hardware that can exploit that sparsity.
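That tradeoff is easy to see with PyTorch's built-in utilities; a toy sketch using `torch.nn.utils.prune` rather than any of the libraries above:

```python
import torch
import torch.nn.utils.prune as prune

unstructured = torch.nn.Linear(512, 512)
structured = torch.nn.Linear(512, 512)

# Unstructured: zero the 50% smallest-magnitude individual weights.
# The tensor keeps its shape, so common GPUs see no speedup without
# sparsity-aware kernels (e.g. DeepSparse on CPUs).
prune.l1_unstructured(unstructured, name="weight", amount=0.5)

# Structured: zero entire output channels (rows) by L2 norm. Tools like
# Torch-Pruning go further and physically remove those channels, which
# is what yields real speedups on dense hardware.
prune.ln_structured(structured, name="weight", amount=0.5, n=2, dim=0)

print((unstructured.weight == 0).float().mean().item())        # ~0.5, scattered zeros
print((structured.weight.abs().sum(dim=1) == 0).sum().item())  # ~256 whole rows zeroed
```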
Project mention: My SSD suddenly died. I only lost 10 minutes of data, thanks to ZFS | news.ycombinator.com | 2023-08-22
For people who don't want to use ZFS but are okay with LVM: wyng-backup (formerly sparsebak)
https://github.com/tasket/wyng-backup
Project mention: Show HN: Compress vision-language and unimodal AI models by structured pruning | news.ycombinator.com | 2023-07-31
Project mention: Outlier Weighed Layerwise Sparsity: A Missing Secret Sauce for Pruning LLMs | news.ycombinator.com | 2023-10-10
Paper abstract: Large Language Models (LLMs), renowned for their remarkable performance across diverse domains, present a challenge due to their colossal model size when it comes to practical deployment. In response to this challenge, efforts have been directed toward the application of traditional network pruning techniques to LLMs, uncovering that a massive number of parameters can be pruned in one shot without hurting performance. Building upon insights gained from pre-LLM models, particularly BERT-level language models, prevailing LLM pruning strategies have consistently adhered to the practice of uniformly pruning all layers at equivalent sparsity levels, resulting in robust performance. However, this observation stands in contrast to the prevailing trends observed in the field of vision models, where non-uniform layerwise sparsity typically yields substantially improved results. To elucidate the underlying reasons for this disparity, we conduct a comprehensive analysis of the distribution of token features within LLMs. In doing so, we discover a strong correlation with the emergence of outliers, defined as features exhibiting significantly greater magnitudes compared to their counterparts in feature dimensions. Inspired by this finding, we introduce a novel LLM pruning methodology that incorporates a tailored set of non-uniform layerwise sparsity ratios specifically designed for LLM pruning, termed Outlier Weighed Layerwise sparsity (OWL). The sparsity ratio of OWL is directly proportional to the outlier ratio observed within each layer, facilitating a more effective alignment between layerwise weight sparsity and outlier ratios. Our empirical evaluation, conducted across the LLaMA-V1 family and OPT and spanning various benchmarks, demonstrates the distinct advantages offered by OWL over previous methods. For instance, our approach exhibits a remarkable performance gain, surpassing the state-of-the-art Wanda and SparseGPT by 61.22 and 6.80 perplexity at a high sparsity level of 70%, respectively. Code is available at https://github.com/luuyin/OWL.
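As a rough illustration only: a hypothetical sketch that follows the abstract's statement that layer sparsity is proportional to the layer's outlier ratio. The threshold, the allocation formula, and the toy layers are all assumptions; the official implementation at https://github.com/luuyin/OWL differs in detail:

```python
import torch

def outlier_ratio(weight: torch.Tensor, m: float = 2.0) -> float:
    """Fraction of weights whose magnitude exceeds m x the layer's mean
    magnitude. The threshold m is illustrative, not from the paper."""
    mag = weight.abs()
    return (mag > m * mag.mean()).float().mean().item()

def layerwise_sparsities(layers, target: float = 0.7) -> list:
    """Toy allocation: per-layer sparsity proportional to each layer's
    outlier ratio, rescaled so the average matches the global target
    (clipping aside). Not the official OWL algorithm."""
    ratios = [outlier_ratio(l.weight) for l in layers]
    scale = target * len(ratios) / (sum(ratios) + 1e-12)
    return [min(r * scale, 0.99) for r in ratios]

layers = [torch.nn.Linear(64, 64) for _ in range(4)]
for l in layers:
    torch.nn.init.normal_(l.weight)  # normal init so some outliers exist
print(layerwise_sparsities(layers))  # averages roughly the 0.7 target
```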
Python pruning-related posts
- [P] Help: I want to compress EfficientnetV2 using pruning.
- [R] 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
- Intel Textual Inversion Training on Hugging Face
- [R] New sparsity research (oBERT) enabled 175X increase in CPU performance for MLPerf submission
- [R] BERT-Large: Prune Once for DistilBERT Inference Performance
- [R] How well do sparse ImageNet models transfer? Prune once and deploy anywhere for inference performance speedups! (arxiv link in comments)
- Still worrying about model compression? MMRazor may work for you.
Index
What are some of the best open-source pruning projects in Python? This list will help you:
# | Project | Stars
---|---|---
1 | deepsparse | 2,866
2 | Torch-Pruning | 2,288
3 | sparseml | 1,974
4 | neural-compressor | 1,950
5 | aimet | 1,900
6 | model-optimization | 1,464
7 | mmrazor | 1,361
8 | nncf | 771
9 | Sparsebit | 319
10 | sparsify | 315
11 | only_train_once | 260
12 | wyng-backup | 236
13 | UPop | 83
14 | delve | 77
15 | OWL | 39
16 | thesis | 15
17 | Pi-SqueezeDet | 2