mmrazor vs neural-compressor
| | mmrazor | neural-compressor |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 1,365 | 1,950 |
| Growth | 3.8% | 6.5% |
| Activity | 2.8 | 9.8 |
| Last commit | 17 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mmrazor
- MMDeploy: Deploy All the Algorithms of OpenMMLab
  MMRazor: OpenMMLab model compression toolbox and benchmark.
- Still worrying about model compression? MMRazor may work for you.
- Still worrying about model compression? MMRazor is all you need
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
  https://github.com/open-mmlab/mmrazor, it may work for you~
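The posts above pitch MMRazor as a toolbox for model compression (distillation, pruning, NAS and quantization). MMRazor itself is driven by OpenMMLab-style config files, so rather than guessing at its API, the sketch below shows the underlying technique of knowledge distillation in plain PyTorch; every module, size and hyperparameter here is illustrative and none of it is MMRazor code.

```python
# Generic knowledge-distillation sketch in plain PyTorch.
# This only illustrates the kind of compression MMRazor automates;
# it is NOT MMRazor's API (MMRazor configures algorithms via OpenMMLab configs).
import torch
import torch.nn.functional as F
from torch import nn

# Large "teacher" and small "student" stand-in networks (hypothetical shapes).
teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
temperature = 4.0  # softens both logit distributions

x = torch.randn(64, 128)  # stand-in batch; real training iterates a dataloader
with torch.no_grad():
    t_logits = teacher(x)  # teacher is frozen, no gradients needed

s_logits = student(x)
# Hinton-style KD loss: KL divergence between softened student and teacher
# distributions, rescaled by T^2 to keep gradient magnitudes comparable.
loss = F.kl_div(
    F.log_softmax(s_logits / temperature, dim=1),
    F.softmax(t_logits / temperature, dim=1),
    reduction="batchmean",
) * temperature ** 2

optimizer.zero_grad()
loss.backward()
optimizer.step()
```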
neural-compressor
- Intel Textual Inversion Training on Hugging Face
- An open-source library for optimizing deep learning inference. (1) You select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and then (3) serves an optimized model that runs much faster in inference.
  Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFLite and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
- Meet Intel® Neural Compressor: An Open-Source Python Library for Model Compression that Reduces the Model Size and Increases the Speed of Deep Learning Inference for Deployment on CPUs or GPUs
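The posts above describe Intel Neural Compressor as a library that shrinks models and speeds up inference. A minimal post-training quantization sketch is shown below; it assumes the 2.x-style Python API (`PostTrainingQuantConfig` plus `quantization.fit`) and uses a torchvision ResNet-18 with random calibration data as stand-ins, so exact names, defaults and behavior may differ between releases.

```python
# Hypothetical sketch of INT8 post-training quantization with Intel Neural Compressor.
# Based on the 2.x-style API; treat it as an illustration, not a reference implementation.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# FP32 model to compress (any torch.nn.Module would be handled the same way).
fp32_model = resnet18(weights=None).eval()

# Small calibration set of (input, label) pairs; real use samples from training data.
calib_data = TensorDataset(
    torch.randn(32, 3, 224, 224),
    torch.zeros(32, dtype=torch.long),
)
calib_loader = DataLoader(calib_data, batch_size=8)

# Run default post-training static quantization to produce an INT8 model.
q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_loader,
)

# Persist the quantized model for deployment.
q_model.save("./quantized_resnet18")
```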
What are some alternatives?
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
mmaction2 - OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
tflite-micro - Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
Pointnet_Pointnet2_pytorch - PointNet and PointNet++ implemented by pytorch (pure python) and on ModelNet, ShapeNet and S3DIS.
nebuly - The user analytics platform for LLMs
PaddleViT - :robot: PaddleViT: State-of-the-art Visual Transformer and MLP Models for PaddlePaddle 2.0+
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
ttach - Image Test Time Augmentation with PyTorch!
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
sparsednn - Fast sparse deep learning on CPUs
Lion - Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"