| | only_train_once | Torch-Pruning |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 262 | 2,340 |
| Growth | - | - |
| Activity | 8.9 | 9.4 |
| Latest commit | 9 days ago | 15 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
only_train_once
-
OTOv2: Automatic One-Shot General DNN Training and Compression Framework.
GitHub: https://github.com/tianyic/only_train_once
Torch-Pruning
-
[P] Help: I want to compress EfficientNetV2 using pruning.
I also tried structured pruning from https://github.com/VainF/Torch-Pruning, as they report EfficientNetV2 to be "prunable", but got much worse results. The advantage of this approach, however, is that it keeps the model dense, so you get a real speed-up on common GPUs; unstructured pruning merely sparsifies the model, and you need hardware that can exploit that sparsity.
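The trade-off described in the comment above can be sketched without any framework. Below is a minimal, plain-Python illustration (the helper names are hypothetical, not Torch-Pruning's API): structured pruning drops whole output channels, so the weight matrix actually shrinks and stays dense, while unstructured pruning only zeroes individual weights and leaves the shape unchanged.

```python
def l1_norm(row):
    """L1 norm of one output channel's weights."""
    return sum(abs(w) for w in row)

def structured_prune(weight, keep_ratio):
    """Drop whole rows (output channels) with the smallest L1 norm.
    The result is a smaller *dense* matrix, so it runs faster on any hardware."""
    n_keep = max(1, int(len(weight) * keep_ratio))
    ranked = sorted(weight, key=l1_norm, reverse=True)
    return ranked[:n_keep]

def unstructured_prune(weight, keep_ratio):
    """Zero out the smallest individual weights. The shape is unchanged,
    so a speed-up requires hardware or kernels that exploit sparsity."""
    flat = sorted((abs(w) for row in weight for w in row), reverse=True)
    n_keep = max(1, int(len(flat) * keep_ratio))
    threshold = flat[n_keep - 1]
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weight]

# 4 output channels with 2 input weights each (toy numbers)
weight = [[0.9, -0.8], [0.01, 0.02], [0.5, -0.4], [0.03, 0.0]]
dense = structured_prune(weight, keep_ratio=0.5)     # 2 channels remain, still dense
sparse = unstructured_prune(weight, keep_ratio=0.5)  # 4 channels, small weights zeroed
```

In a real network, removing a channel also forces matching index removals in every dependent layer (the following layer's input channels, batch-norm parameters, residual branches), which is exactly the dependency bookkeeping Torch-Pruning automates.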
-
[D] What is your go-to implementation for structured pruning?
I'm currently using this repo and I find it very intuitive: https://github.com/VainF/Torch-Pruning. Not sure if it fits your needs, but check it out :)
What are some alternatives?
archai - Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.
SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
delve - PyTorch model training and layer saturation monitor
efficient-gnns - Code and resources on scalable and efficient Graph Neural Networks
model-optimization - A toolkit for optimizing ML models built with Keras and TensorFlow for deployment, including quantization and pruning.
nni - An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
ds2 - The easiest way to use AI models without coding (web UI and API support)
UPop - [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.