optimum vs onnxruntime

| | optimum | onnxruntime |
|---|---|---|
| Mentions | 8 | 58 |
| Stars | 2,209 | 12,960 |
| Growth | 6.4% | 4.4% |
| Activity | 9.5 | 10.0 |
| Latest Commit | 1 day ago | 4 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
optimum
- FastEmbed: Fast and Lightweight Embedding Generation for Text
Shout out to Huggingface's Optimum – which made it easier to quantize models.
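For context, quantization through Optimum's ONNX Runtime integration looks roughly like this (a minimal sketch following the documented API; the model id, config choice, and save directory are example values):

```python
# A sketch of dynamic quantization via Optimum's ONNX Runtime backend.
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a Transformers checkpoint to ONNX on the fly (example model id)
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)

quantizer = ORTQuantizer.from_pretrained(model)
# Dynamic (weight-only) int8 quantization targeting AVX512-VNNI CPUs
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Writes the quantized ONNX model and its config to save_dir
quantizer.quantize(save_dir="quantized_model", quantization_config=qconfig)
```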
- [D] Is ML doomed to end up closed-source?
Optimum to accelerate inference of transformers with hardware optimization
- [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
Yes, the Optimum lib's documentation is unfortunately not yet in the best shape. I would be really thankful if you file an issue detailing where the docs can be improved: https://github.com/huggingface/optimum/issues. Also, if you have feature requests, such as a more flexible API, we are eager for community contributions or suggestions!
- BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
In order to support BetterTransformer with the canonical Transformer models from the Transformers library, an integration was done with the open-source library Optimum as a one-liner:
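The one-liner referred to is presumably along these lines, following Optimum's BetterTransformer documentation (the model id is an example):

```python
# The Optimum BetterTransformer integration: transform() swaps supported
# layers for PyTorch's fastpath kernels (example model id).
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
model = BetterTransformer.transform(model)  # the one-liner conversion
```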
- Why is self-attention not as deployment-friendly?
- [P] Accelerated Inference with Optimum and Transformers Pipelines
It’s Lewis here from the open-source team at Hugging Face 🤗. I'm excited to share the latest release of our Optimum library, which provides a suite of performance optimization tools to make Transformers run fast on accelerated hardware!
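A minimal sketch of the flow the post describes, assuming Optimum's documented ONNX Runtime classes (the checkpoint is an example):

```python
# Running a Transformers pipeline on an ONNX Runtime-backed model.
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Export to ONNX and load it with ONNX Runtime as the inference backend
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Optimum makes accelerated inference a drop-in change"))
```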
- [N] Hugging Face raised $100M at $2B to double down on community, open-source & ethics
Create libraries to optimize ML models during training and inference for specific hardware https://github.com/huggingface/optimum
- [P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Have you seen this article from HF: https://huggingface.co/blog/bert-cpu-scaling-part-2? There is also a lib, https://github.com/huggingface/optimum. Is the gain worth the tweaking? Is the OneDNN stuff easy to deploy on Triton?
onnxruntime
- Machine Learning with PHP
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
- AI Inference now available in Supabase Edge Functions
Embedding generation uses the ONNX runtime under the hood. This is a cross-platform inferencing library that supports multiple execution providers from CPU to specialized GPUs.
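As a generic illustration of execution providers in ONNX Runtime's Python API (the model file, input shape, and provider list are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Providers are tried in order; ONNX Runtime falls back to the next one
# if a provider (e.g. CUDA) is unavailable on the machine.
session = ort.InferenceSession(
    "model.onnx",  # placeholder model file
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example shape
outputs = session.run(None, {input_name: dummy})
print(session.get_providers(), outputs[0].shape)
```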
- Deep Learning in JavaScript
tfjs is dead, looking at the commit history. The standard now is to convert PyTorch to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser.
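The PyTorch-to-ONNX half of that workflow is a standard `torch.onnx.export` call; a minimal sketch with a placeholder torchvision model (the browser side would then load the exported file with onnxruntime-web):

```python
import torch
import torchvision

# Placeholder model; any traceable torch.nn.Module works the same way
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    # Allow variable batch size at inference time
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```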
- FLaNK Stack 05 Feb 2024
- Vcc – The Vulkan Clang Compiler
- slang[2] has the potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
The above conclusion is drawn from my work on https://github.com/microsoft/onnxruntime/tree/dev/opencl; it was a pure nightmare to work with those drivers and JIT compilers. Hopefully Vcc takes compute shaders more seriously.
- Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
I did. It depends on what you want: for an overview of how ONNX Runtime works, Microsoft has a bunch of material on https://onnxruntime.ai, but the Java content there is a bit lacking, as I've not had time to write much. Eventually I'll probably write something similar to the C# Stable Diffusion tutorial they have there, but for the Java API.
For writing ONNX models from Java, we added an ONNX export system to Tribuo in 2022, which anything on the JVM can use to export ONNX models more easily than writing a protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it; otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
- VectorDB: Vector Database Built by Kagi Search
What about models besides GPT? Most of the popular vector encoding models aren't using this architecture.
If you really didn't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime).
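A hedged sketch of what PyTorch-free embedding generation could look like, pairing the standalone tokenizers library with an already-exported ONNX encoder (the file name, input names, and output layout all depend on how the model was exported):

```python
import numpy as np
import onnxruntime as ort
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")  # example tokenizer
session = ort.InferenceSession("encoder.onnx", providers=["CPUExecutionProvider"])

enc = tokenizer.encode("vector databases are neat")
feeds = {
    "input_ids": np.array([enc.ids], dtype=np.int64),
    "attention_mask": np.array([enc.attention_mask], dtype=np.int64),
    # some exports also expect "token_type_ids"
}

hidden = session.run(None, feeds)[0]  # assumed last_hidden_state: (1, seq, dim)
mask = feeds["attention_mask"][..., None]
embedding = (hidden * mask).sum(axis=1) / mask.sum(axis=1)  # mean pooling
print(embedding.shape)
```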
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
What are some alternatives?
FasterTransformer - Transformer related optimization, including BERT, GPT
onnx - Open standard for machine learning interoperability
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
safetensors - Simple, safe way to store and distribute tensors
onnx-simplifier - Simplify your onnx model
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
text-generation-inference - Large Language Model Text Generation Inference
onnx-tensorflow - Tensorflow Backend for ONNX
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
MLflow - Open source platform for the machine learning lifecycle