| | UPop | Torch-Pruning |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 82 | 2,324 |
| Growth | - | - |
| Activity | 8.4 | 9.4 |
| Last Commit | 6 months ago | 10 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
UPop

Torch-Pruning

[P] Help: I want to compress EfficientNetV2 using pruning.
I also tried structured pruning from https://github.com/VainF/Torch-Pruning, as they report EfficientNetV2 to be "prunable", but got much worse results. However, the advantage of this approach is that it keeps the model dense, so you get a real speed-up on common GPUs, while unstructured pruning sparsifies the model and you need hardware that can exploit such sparsity.
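To make the dense-vs-sparse distinction above concrete, here is a minimal sketch using PyTorch's built-in torch.nn.utils.prune; the toy Conv2d is a hypothetical stand-in for one layer of a larger network, not code from either project:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy layer standing in for one convolution of a larger network.
conv = nn.Conv2d(32, 64, kernel_size=3)

# Unstructured pruning: zero out 50% of individual weights by L1 magnitude.
# The weight tensor keeps its shape, so dense GPU kernels do the same work;
# the zeros only pay off on hardware/runtimes that exploit sparsity.
prune.l1_unstructured(conv, name="weight", amount=0.5)
print(conv.weight.shape)                  # torch.Size([64, 32, 3, 3]), unchanged
print((conv.weight == 0).float().mean())  # ~0.5, fraction of zeroed weights

# Structured pruning zeroes whole output channels (here 50% by L2 norm).
# Even this only masks the channels; physically removing them, together with
# the coupled channels of downstream layers, is what Torch-Pruning automates.
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
```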
[D] What is your go-to implementation for structured pruning?
I'm currently using this repo and I find it very intuitive: https://github.com/VainF/Torch-Pruning. Not sure if it fits your needs, but check it out :)
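In Torch-Pruning, pruning one layer propagates through a traced dependency graph to all coupled layers, which is what lets it physically shrink the model rather than just masking weights. The sketch below follows the project's v1.x README; names such as MagnitudePruner and ch_sparsity have changed across releases, so treat it as an outline rather than the definitive API:

```python
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18(pretrained=True)
example_inputs = torch.randn(1, 3, 224, 224)

# Rank channels by L2 weight magnitude.
imp = tp.importance.MagnitudeImportance(p=2)

# Keep the final classifier intact so the output dimension stays 1000.
ignored_layers = [
    m for m in model.modules()
    if isinstance(m, torch.nn.Linear) and m.out_features == 1000
]

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,               # used to trace the layer dependency graph
    importance=imp,
    ch_sparsity=0.5,              # remove ~50% of channels in prunable layers
    ignored_layers=ignored_layers,
)
pruner.step()  # physically removes channels; the model stays dense but smaller

# The pruned model runs as an ordinary dense network, so the speed-up is real
# on common GPUs; fine-tuning afterwards is usually needed to recover accuracy.
out = model(example_inputs)
print(out.shape)  # torch.Size([1, 1000])
```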
What are some alternatives?
image-captioning - Image captioning using Python and BLIP
SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
efficient-gnns - Code and resources on scalable and efficient Graph Neural Networks
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
only_train_once - OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM
neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
nni - An open-source AutoML toolkit that automates the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
model-optimization - A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Painter - Painter & SegGPT Series: Vision Foundation Models from BAAI
OFA - Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
openscene - [CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies