UPop vs model-optimization

| | UPop | model-optimization |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 82 | 1,470 |
| Growth | - | 0.8% |
| Activity | 8.4 | 6.8 |
| Last commit | 6 months ago | 7 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
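The exact weighting behind the activity score isn't given here; as a rough illustration only, the "recent commits have higher weight" idea can be modeled with an exponential decay, where each commit's contribution halves after a chosen half-life. The function name and the 30-day half-life below are assumptions, not the site's actual formula:

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes a weight
    that halves every `half_life_days`, so new commits dominate old ones.
    Illustrative only; not the comparison site's actual formula."""
    now = time.time()
    day = 86400.0  # seconds per day
    return sum(0.5 ** ((now - ts) / day / half_life_days)
               for ts in commit_timestamps)

# Example: commits that are 1, 40, and 400 days old contribute roughly
# 0.98, 0.40, and ~0.0001 respectively.
now = time.time()
print(activity_score([now - 1 * 86400, now - 40 * 86400, now - 400 * 86400]))
```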
Posts mentioning model-optimization

Need Help With Pruning Model Weights in Tensorflow 2

I have been following the example shown here, and so far I've had mixed results. I wanted to ask for some help, because the resources I've found online haven't been able to answer some of my questions (perhaps because some of them are obvious and I am just being dumb).
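For questions like this, the usual entry point in model-optimization (the TensorFlow Model Optimization Toolkit) is its Keras magnitude-pruning API. Below is a minimal sketch of that workflow; the model architecture, sparsity target, and step counts are placeholder assumptions, not taken from the post:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Small stand-in model; the layer sizes are placeholders.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Ramp magnitude pruning from 0% to 50% sparsity over the first 1,000 steps.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.5,
    begin_step=0,
    end_step=1000,
)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule
)

pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# The UpdatePruningStep callback advances the pruning schedule each step;
# training data is omitted here, so fit() is shown commented out.
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Remove the training-time pruning wrappers before export, leaving a
# plain Keras model with the sparse weights baked in.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```

Note that `UpdatePruningStep` must be passed to `fit()`; the pruning schedule only advances through that callback, so without it no weights are pruned.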
What are some alternatives?
Torch-Pruning - [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
image-captioning - Image captioning using Python and BLIP
qkeras - QKeras: a quantization deep learning library for TensorFlow Keras
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
3d-model-convert-to-gltf - Convert 3D models (STL/IGES/STEP/OBJ/FBX) to glTF, with compression
neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
aimet - AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
OFA - Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
larq - An Open-Source Library for Training Binarized Neural Networks