mmrazor vs openvino
| | mmrazor | openvino |
|---|---|---|
| Mentions | 4 | 17 |
| Stars | 1,365 | 5,911 |
| Growth | 3.8% | 6.6% |
| Activity | 2.8 | 10.0 |
| Latest commit | 17 days ago | 4 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
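The recency weighting described above can be sketched in a few lines. The exact formula this tracker uses is not published here, so the following assumes a simple exponential decay over commit age; the `half_life_days` parameter is purely illustrative:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days` days, so recent commits
    count more than older ones. (Illustrative only; not the tracker's
    actual formula.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with mostly fresh commits outscores one with the same
# number of commits made long ago.
recent = activity_score([1, 2, 3, 5])      # four commits this week
stale = activity_score([200, 250, 300, 365])  # four commits last year
```

Under this weighting, `recent` comes out close to 4 (each fresh commit contributes nearly full weight) while `stale` is near zero, matching the intuition that recent commits dominate the score.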
mmrazor
- MMDeploy: Deploy All the Algorithms of OpenMMLab
  MMRazor: OpenMMLab model compression toolbox and benchmark.
- Still worrying about model compression? MMRazor may work for you.
- Still worrying about model compression? MMRazor is all you need
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
  https://github.com/open-mmlab/mmrazor, it may work for you.
openvino
- FLaNK Stack 05 Feb 2024
- QUIK is a method for quantizing LLM post-training weights to 4 bit precision
- Intel OpenVINO 2023.1.0 released
- Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- OpenVINO 2023.1.0 released
- [N] Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- Powering Anomaly Detection for Industry 4.0
Anomalib is an open-source deep learning library developed by Intel that makes it easy to benchmark different anomaly detection algorithms on both public and custom datasets, all by simply modifying a config file. As the largest public collection of anomaly detection algorithms and datasets, it has a strong focus on image-based anomaly detection. It’s a comprehensive, end-to-end solution that includes cutting-edge algorithms, relevant evaluation methods, prediction visualizations, hyperparameter optimization, and inference deployment code with Intel’s OpenVINO Toolkit.
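Anomalib's real API is config-driven and not reproduced here. As a rough illustration of the idea behind reconstruction-based image anomaly detection, a model fitted only on "normal" samples assigns each input a score based on how far it deviates from what the model expects; inputs above a threshold are flagged as anomalous. A minimal stand-in using a per-pixel mean as the "model" (all names and data here are hypothetical, and real libraries use deep feature extractors instead):

```python
from statistics import mean

def fit_normal_model(normal_images):
    """'Train' on normal samples: the model here is just the per-pixel
    mean image. Toy stand-in for a learned model of normal data."""
    return [mean(px) for px in zip(*normal_images)]

def anomaly_score(model, image):
    """Mean squared deviation from the normal-data model; higher
    means more anomalous."""
    return mean((p - m) ** 2 for p, m in zip(image, model))

# Tiny 4-"pixel" example: normal images cluster near [1, 1, 0, 0].
normal = [[1.0, 1.0, 0.0, 0.0], [0.9, 1.1, 0.1, -0.1]]
model = fit_normal_model(normal)
ok = anomaly_score(model, [1.0, 1.0, 0.0, 0.0])    # looks normal
bad = anomaly_score(model, [0.0, 0.0, 1.0, 1.0])   # inverted pattern
```

The normal-looking input scores near zero while the inverted pattern scores much higher, which is the decision signal an anomaly detector thresholds on.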
What are some alternatives?
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
mmaction2 - OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
Pointnet_Pointnet2_pytorch - PointNet and PointNet++ implemented in PyTorch (pure Python), with support for ModelNet, ShapeNet and S3DIS.
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
PaddleViT - :robot: PaddleViT: State-of-the-art Visual Transformer and MLP Models for PaddlePaddle 2.0+
ttach - Image Test Time Augmentation with PyTorch!
nebuly - The user analytics platform for LLMs