| | model-optimization | Pretrained-Language-Model |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 1,470 | 2,960 |
| Growth | 0.8% | 0.5% |
| Last commit | 11 days ago | 4 months ago |
| Activity | 6.8 | 6.1 |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
model-optimization
Need Help With Pruning Model Weights in Tensorflow 2
I have been following the example shown here, and so far I've had mixed results. I wanted to ask for help, because the resources I've found online haven't answered some of my questions (perhaps because some of them are obvious and I am just being dumb).
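The example the post refers to is the Keras pruning tutorial from the model-optimization toolkit. Below is a minimal sketch of that magnitude-pruning workflow; the toy dense model, the 80% target sparsity, and the step counts are placeholder assumptions, not the poster's actual setup.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model standing in for the poster's network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 80% over the first 1,000 training steps
# (assumed values, for illustration only).
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.8,
    begin_step=0,
    end_step=1000,
)

# Wrap the model so low-magnitude weights are zeroed out during training.
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=pruning_schedule
)
pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# UpdatePruningStep must be passed to fit(); training errors without it.
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned_model.fit(x_train, y_train, epochs=2, callbacks=callbacks)

# Strip the pruning wrappers before saving, so the exported model
# contains only plain (now sparse) Keras layers.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```

One common source of "mixed results" here: strip_pruning leaves the zeroed weights stored as dense tensors, so the size benefit of pruning only shows up after compressing the saved model (e.g. with gzip) or running it through a sparsity-aware runtime.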
Pretrained-Language-Model
What are some alternatives?
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
SqueezeLLM - [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
qkeras - QKeras: a quantization deep learning library for Tensorflow Keras
DWPose - "Effective Whole-body Pose Estimation with Two-stages Distillation" (ICCV 2023, CV4Metaverse Workshop)
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Torch-Pruning - [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
3d-model-convert-to-gltf - Convert 3D models (STL/IGES/STEP/OBJ/FBX) to glTF, with compression
Efficient-AI-Backbones - Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
aimet - AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
PaddleClas - A treasure chest for visual classification and recognition powered by PaddlePaddle
larq - An Open-Source Library for Training Binarized Neural Networks
Lion - Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"