| | sparseml | tvm |
|---|---|---|
| Mentions | 12 | 16 |
| Stars | 1,979 | 11,186 |
| Growth | 1.1% | 1.3% |
| Activity | 9.6 | 9.9 |
| Latest commit | 2 days ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sparseml
- Can You Achieve GPU Performance When Running CNNs on a CPU?
-
[D] DeepSparse: 1,000X CPU Performance Boost & 92% Power Reduction with Sparsified Models in MLPerf™ Inference v3.0
SparseML is open source: https://github.com/neuralmagic/sparseml
-
[R] New sparsity research (oBERT) enabled 175X increase in CPU performance for MLPerf submission
Utilizing the oBERT research we published at Neural Magic, plus some further iteration, we’ve enabled a 175X increase in NLP performance while retaining 99% accuracy on the question-answering task in MLPerf. A combination of distillation, layer dropping, quantization, and unstructured pruning with oBERT enabled these large performance gains through the DeepSparse Engine. All of our contributions and research are open source or free to use. Read through the oBERT paper on arXiv, try out the research in SparseML, and dive into the writeup to learn more about how we achieved these results and how to apply them to your own use cases!
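For context, here is a minimal sketch of how a SparseML recipe is typically driven from a PyTorch training loop; this is not the exact MLPerf setup, and "recipe.yaml" is a placeholder for one of the published oBERT recipes:

```python
# Minimal sketch: applying a SparseML pruning/quantization recipe in training.
# "recipe.yaml" is a placeholder for an actual oBERT recipe file.
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

model = torch.nn.Linear(128, 2)  # stand-in for a real transformer
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# Load the recipe and wrap the optimizer so each step applies the schedule.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)

# ...run the usual training loop here; pruning masks and quantization
# observers are updated automatically on each optimizer step...

manager.finalize(model)  # remove the hooks once training completes
```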
-
An open-source library for optimizing deep learning inference: (1) you select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and (3) it serves an optimized model that runs much faster at inference.
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFLite, and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
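To make that workflow concrete, here is a rough sketch of nebullvm's one-call flow; the import path and the `optimize_model` signature are assumptions from one point in the project's history, so check the README for the current API:

```python
# Hedged sketch: nebullvm tries OpenVINO, TensorRT, TVM, ONNX Runtime, etc.
# and returns the fastest variant of the model it finds for this hardware.
# The entry point and argument names are assumptions; the API has changed
# across releases.
import torch
import torchvision.models as models
from nebullvm import optimize_model  # assumed entry point

model = models.resnet18(weights=None)
# A few sample (inputs, label) pairs used to profile candidate backends.
input_data = [((torch.randn(1, 3, 224, 224),), torch.tensor([0])) for _ in range(10)]

optimized_model = optimize_model(model, input_data=input_data)
```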
-
[R] BERT-Large: Prune Once for DistilBERT Inference Performance
BERT-Large (345 million parameters) is now faster than the much smaller DistilBERT (66 million parameters), all while retaining BERT-Large accuracy! We made this possible with Intel Labs by applying the cutting-edge sparsification and quantization research from their Prune Once for All paper and running the result in the DeepSparse engine. It makes BERT-Large 12x smaller while delivering an 8x latency speedup on commodity CPUs. We open-sourced the research in SparseML; run through the overview here and give it a try!
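If you want to try a model like this, here is a hedged sketch of serving it with DeepSparse's `Pipeline` API; the SparseZoo stub is illustrative only (browse sparsezoo.neuralmagic.com for the real Prune Once for All stubs):

```python
# Hedged sketch: question answering with a sparse-quantized BERT in DeepSparse.
from deepsparse import Pipeline

qa = Pipeline.create(
    task="question-answering",
    model_path="zoo:nlp/question_answering/bert-large/...",  # hypothetical stub
)
out = qa(
    question="How much smaller is the pruned model?",
    context="Prune OFA makes BERT-Large 12x smaller with an 8x CPU speedup.",
)
print(out.answer)
```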
-
[R] How well do sparse ImageNet models transfer? Prune once and deploy anywhere for inference performance speedups! (arxiv link in comments)
All models and code are open-sourced, try it out with the walk-through in SparseML.
-
[P] Compound sparsification: using pruning, quantization, and layer dropping to improve BERT performance
Hi u/_Arsenie_Boca_, definitely. Our recipes and sparse models, along with the SparseZoo Python API to download them, are open source, and the SparseZoo UI for exploring them is free to use. The SparseML codebase for applying recipes (enabling the creation of the sparse models) is open source, as is the Sparsify codebase for creating recipes through a UI. Finally, the DeepSparse Engine's backend is closed source but free to use.
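As a pointer, a short sketch of the SparseZoo Python API mentioned above; the stub is hypothetical, and the API surface has shifted between sparsezoo releases:

```python
# Hedged sketch: downloading a sparse model through the SparseZoo Python API.
from sparsezoo import Model

stub = "zoo:cv/classification/resnet_v1-50/..."  # hypothetical stub
model = Model(stub)
model.download()   # fetch the ONNX export and training files locally
print(model.path)  # local directory holding the downloaded artifacts
```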
-
Tutorial: Prune and quantize YOLOv5 for 12x smaller size and 10x better performance on CPUs
Hi mikedotonline, we haven't focused on any datasets specifically for natural/forest environments. If you have any in mind, we could do some quick transfer-learning runs to see how these models perform on them! Also, if you want to try them out, we have a tutorial that walks through transfer learning the sparse architectures onto new data: https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md
-
Tutorial: Real-time YOLOv3 on a Laptop Using Sparse Quantization
Apply the sparse-quantized results to your dataset by following the YOLOv3 tutorial. All software is open source or freely available.
-
Pruning and Quantizing Ultralytics YOLOv3
We’ve noticed YOLOv3 runs pretty slowly on CPUs, restricting its use for real-time requests. Given that, we looked into combining pruning and quantization using the Ultralytics YOLOv3 model, and the results turned out well: over 5X faster than a dense FP32 baseline! We open-sourced the integration and models on GitHub for anyone to play around with; if you’re interested, please check it out and give us feedback.
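For anyone who wants to poke at a pruned-quantized export directly, here is a minimal sketch of running an ONNX file through the DeepSparse engine; the model path is a placeholder:

```python
# Minimal sketch: benchmark a pruned-quantized ONNX model with DeepSparse.
import numpy as np
from deepsparse import compile_model

engine = compile_model("yolov3-pruned-quant.onnx", batch_size=1)  # placeholder path
image = np.random.rand(1, 3, 416, 416).astype(np.float32)  # dummy 416x416 frame
outputs = engine.run([image])     # list of output arrays
print(engine.benchmark([image]))  # latency and throughput statistics
```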
tvm
-
Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
Yes. Web-llm is a wrapper of tvmjs: https://github.com/apache/tvm
Just wrappers all the way down
-
Making AMD GPUs competitive for LLM inference
Yes, this is coming! Others at OctoML and in the TVM community and I are actively working on multi-GPU support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:
Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447
-
VSL; Vlang's Scientific Library
Would it make sense to add backend support for OpenXLA, Apache TVM, Jittor, or similar projects to get GPU, TPU, and other accelerator support for free?
- Apache TVM
-
MLC LLM - "MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases."
I have tried the iPhone app. It's fast. They're using Apache TVM, which should allow better use of native accelerators on different devices. Like using Metal on Apple and Vulkan or CUDA or whatever, instead of just running the thing on the CPU like llama.cpp.
-
ONNX Runtime merges WebGPU back end
I was going to answer the same; I find the approach of machine learning compilers that compile models directly to host and device code better than having to bring along a huge runtime. There are exciting projects in this area like TVM Unity [1], IREE [2], or torch.export [3].
[1] https://github.com/apache/tvm/tree/unity
[2] https://github.com/openxla/iree
[3] https://pytorch.org/get-started/pytorch-2.0/#inference-and-e...
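As a small illustration of the third option, here is a hedged sketch of `torch.export` (PyTorch >= 2.1), which captures a model as a standalone graph that ahead-of-time compilers can consume:

```python
# Hedged sketch: capture a module with torch.export so downstream compilers
# can consume the graph without the Python-level model code.
import torch

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x @ x.mT)

exported = torch.export.export(TinyNet(), (torch.randn(4, 4),))
print(exported.graph)  # the captured graph, free of Python control flow
```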
-
Esp32 tensorflow lite
Apache TVM home page: https://tvm.apache.org/
-
Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
-
Run Stable Diffusion on Your M1 Mac’s GPU
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM [0] and ONNX [1].
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
For models on Replicate, we use Docker, packaged with Cog for this stuff.[2] Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
[0] https://tvm.apache.org/
-
How to get started with machine learning.
Or use TVM; the idea is to compile your model into code that you can load at runtime. Like onnxruntime, it only does DNN inference, so you still need your own domain-specific code.
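Here is a minimal sketch of that compile-then-load workflow using TVM's classic Relay API; details vary by TVM version:

```python
# Minimal sketch: compile a model ahead of time with TVM, then load the
# resulting shared library at runtime through the lightweight TVM runtime.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Build and compile a trivial Relay function for the local CPU.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
lib = relay.build(mod, target="llvm")
lib.export_library("model.so")  # artifact you ship with your application

# Later, at runtime: load the compiled module and run inference.
loaded = tvm.runtime.load_module("model.so")
gmod = graph_executor.GraphModule(loaded["default"](tvm.cpu()))
gmod.set_input("x", np.random.rand(1, 8).astype("float32"))
gmod.run()
print(gmod.get_output(0).numpy())
```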
What are some alternatives?
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
model-optimization - A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
sparsify - ML model optimization product to accelerate inference.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
LAVIS - LAVIS - A One-stop Library for Language-Vision Intelligence
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
tflite-micro - Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
nebuly - The user analytics platform for LLMs
pytorch2keras - PyTorch to Keras model convertor