kernl vs optimum
| | kernl | optimum |
|---|---|---|
| Mentions | 8 | 8 |
| Stars | 1,457 | 2,141 |
| Growth | 1.8% | 6.5% |
| Activity | 1.5 | 9.5 |
| Latest commit | 2 months ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kernl
- [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
I periodically check kernl.ai to see whether the documentation and tutorial sections have been expanded. My advice is to put some real effort and focus into examples and tutorials; that is key for an optimization/acceleration library. 10x-ing the users of a library like this is much more likely to come from spending 10 out of every 100 developer hours writing tutorials than from spending those same 8 or 9 hours developing new features which only a small minority understand how to apply.
- [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
FlashAttention + quantization has, to the best of my knowledge, not yet been explored, but I think it would be a great engineering direction. I would not expect to see it natively in PyTorch's BetterTransformer any time soon, though. /u/pommedeterresautee & folks at ELS-RD did awesome work releasing kernl, where custom implementations (through OpenAI Triton) could maybe easily live.
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
Check https://github.com/ELS-RD/kernl/blob/main/src/kernl/optimizer/linear.py for an example.
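That file implements linear-layer replacement via torch.fx pattern rewriting. Below is a generic sketch of the technique, not kernl's actual code: `custom_linear` is a hypothetical stand-in for a fused Triton kernel wrapper, and the tiny model exists only to have something to rewrite.

```python
# Sketch of torch.fx pattern replacement, the technique used by optimizers
# like kernl's linear.py. custom_linear is a hypothetical stand-in for a
# Triton kernel; here it is just the eager fallback.
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace, replace_pattern

def custom_linear(x, weight, bias):
    return x @ weight.t() + bias  # a real optimizer would call a fused kernel

torch.fx.wrap("custom_linear")  # keep it as a single node when traced

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 8))
        self.bias = torch.nn.Parameter(torch.zeros(4))

    def forward(self, x):
        return F.linear(x, self.weight, self.bias)

def pattern(x, weight, bias):
    return F.linear(x, weight, bias)

def replacement(x, weight, bias):
    return custom_linear(x, weight, bias)

traced = symbolic_trace(TinyModel())
replace_pattern(traced, pattern, replacement)  # reroute linear -> custom kernel
print(traced(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```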
- [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
From https://github.com/ELS-RD/kernl/issues/141 : "Would it be possible to use kernl to speed up Stable Diffusion?"
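The mentions above all revolve around the same entry point, kernl's optimize_model(). A minimal sketch following the usage shown in kernl's README; the model name and input are illustrative, and kernl expects a model in eval mode on a CUDA device, run under fp16 autocast:

```python
# Minimal sketch per kernl's README: optimize_model() swaps supported
# submodules for fused OpenAI Triton kernels. Model and input are
# illustrative, not taken from the posts above.
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # in-place graph rewrite to Triton kernels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world", return_tensors="pt").to("cuda")
with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.float16):
    outputs = model(**inputs)
```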
optimum
- FastEmbed: Fast and Lightweight Embedding Generation for Text
Shout out to Hugging Face's Optimum, which made it easier to quantize models.
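For context, quantizing a model through Optimum's ONNX Runtime backend takes only a few lines. A hedged sketch; the model id and save directory are illustrative, and the export flag varies across Optimum versions:

```python
# Sketch of dynamic INT8 quantization with Optimum's ONNX Runtime backend.
from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# export=True converts the checkpoint to ONNX on the fly (older Optimum
# releases used from_transformers=True instead).
model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2", export=True
)
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="minilm-int8", quantization_config=qconfig)
```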
- [D] Is ML doomed to end up closed-source?
Optimum to accelerate inference of transformers with hardware optimization
- [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
Yes, the Optimum lib's documentation is unfortunately not yet in the best shape. I would be really thankful if you filed an issue detailing where the docs can be improved: https://github.com/huggingface/optimum/issues . Also, if you have feature requests, such as a more flexible API, we are eager for community contributions or suggestions!
- BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
In order to support BetterTransformer with the canonical Transformer models from the Transformers library, an integration was done with the open-source library Optimum as a one-liner, sketched below:
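A sketch of that one-liner, following Optimum's BetterTransformer docs; the model choice is illustrative:

```python
# BetterTransformer via Optimum: one call swaps supported layers for the
# PyTorch-native fast paths.
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
model = BetterTransformer.transform(model)  # the one-liner
```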
- Why is self-attention not as deployment friendly?
- [P] Accelerated Inference with Optimum and Transformers Pipelines
It’s Lewis here from the open-source team at Hugging Face 🤗. I'm excited to share the latest release of our Optimum library, which provides a suite of performance optimization tools to make Transformers run fast on accelerated hardware!
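The release pairs ONNX Runtime-backed models with the familiar pipeline() API. A sketch following the announced pattern; the model id is illustrative:

```python
# Accelerated Transformers pipeline backed by an ONNX Runtime model.
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes ONNX Runtime inference easy!"))
```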
- [N] Hugging Face raised $100M at $2B to double down on community, open-source & ethics
Create libraries to optimize ML models during training and inference for specific hardware: https://github.com/huggingface/optimum
- [P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Have you seen this article from HF: https://huggingface.co/blog/bert-cpu-scaling-part-2 ? There is also a lib: https://github.com/huggingface/optimum . Is the gain worth the tweaking? Is the OneDNN stuff easy to deploy on Triton?
What are some alternatives?
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
FasterTransformer - Transformer related optimization, including BERT, GPT
flash-attention - Fast and memory-efficient exact attention
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
safetensors - Simple, safe way to store and distribute tensors
stable-diffusion-webui - Stable Diffusion web UI
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
text-generation-inference - Large Language Model Text Generation Inference
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.