kernl
serve
| | kernl | serve |
|---|---|---|
| Mentions | 8 | 11 |
| Stars | 1,446 | 3,924 |
| Growth | 1.9% | 2.0% |
| Activity | 1.5 | 9.6 |
| Last commit | about 1 month ago | 5 days ago |
| Language | Jupyter Notebook | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kernl
- [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
I periodically check kernl.ai to see whether the documentation and tutorial sections have been expanded. My advice is to put some real effort and focus into examples and tutorials; that is key for an optimization/acceleration library. 10x-ing the users of a library like this is much more likely to come from spending 10 out of every 100 developer hours writing tutorials than from spending 8 or 9 of those tutorial-writing hours on developing new features which only a small minority understand how to apply.
Kernl repository: https://github.com/ELS-RD/kernl
- [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
FlashAttention + quantization has, to the best of my knowledge, not yet been explored, but I think it would be a great engineering direction. I would not expect to see this any time soon natively in PyTorch's BetterTransformer though. /u/pommedeterresautee & folks at ELS-RD did awesome work releasing kernl, where custom implementations (through OpenAI Triton) could maybe easily live.
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
Check https://github.com/ELS-RD/kernl/blob/main/src/kernl/optimizer/linear.py for an example.
- [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
https://github.com/ELS-RD/kernl/issues/141 > Would it be possible to use kernl to speed up Stable Diffusion?
Quite surprisingly, RMSNorm brings a huge unexpected speedup on top of what we already had! If you want to follow this work: https://github.com/ELS-RD/kernl/pull/107
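For context on the RMSNorm work mentioned above: RMSNorm is a cheaper LayerNorm variant that skips mean-centering. Below is a minimal reference sketch in plain PyTorch, just an illustration of what such a kernel computes, not kernl's actual Triton implementation:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # y = x / RMS(x) * weight, where RMS(x) = sqrt(mean(x^2) + eps)
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x / rms * weight
```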
Scripts are here: https://github.com/ELS-RD/kernl/tree/main/experimental/benchmarks
We are releasing Kernl under the Apache 2 license, a library that makes PyTorch model inference significantly faster. With one line of code we applied the optimizations and made Bert up to 12X faster than the Hugging Face baseline. T5 is also covered in this first release (> 6X speedup on generation, and we are still only halfway through the optimizations!). This has been possible because we wrote custom GPU kernels in Triton, the new OpenAI programming language, and leveraged TorchDynamo.
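To illustrate the "one line of code" claim above, here is a minimal sketch of applying kernl to a Hugging Face model. It assumes kernl exposes an optimize_model entry point and that a CUDA GPU is available; check the repository README for the exact, current API:

```python
import torch
from transformers import AutoModel
from kernl.model_optimization import optimize_model  # assumed entry point

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the single optimization call; Triton kernels are compiled lazily

inputs = {
    "input_ids": torch.ones((1, 128), dtype=torch.long, device="cuda"),
    "attention_mask": torch.ones((1, 128), dtype=torch.long, device="cuda"),
}
with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)  # first call is slow (warmup), subsequent calls are fast
```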
serve
- Show HN: Llama2 Embeddings FastAPI Server
What's wrong with just using Torchserve[1]? We've been using it to serve embedding models in production.
- BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
I did a Space to showcase a bit of the speedups we can get in an end-to-end case with TorchServe, deploying the model on a cloud instance (AWS EC2 g4dn, using one T4 GPU): https://huggingface.co/spaces/fxmarty/bettertransformer-demo
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least makes it optional), and that has some conveniences: ease of use, easy deployment, easy hosting of multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated!

Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
- Choose JavaScript 🧠
- Popular Machine Learning Deployment Tools
GitHub
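Several of the threads above bring up TorchServe (the "serve" project compared on this page) together with dynamic batching. As a hedged illustration only, this is roughly how a model is registered with dynamic batching through TorchServe's management API; the archive name and values below are placeholders, not taken from the threads above:

```python
import requests

# Register a model archive already present in the model store of a running
# TorchServe instance (the management API listens on port 8081 by default).
resp = requests.post(
    "http://localhost:8081/models",
    params={
        "url": "my_model.mar",     # placeholder archive name
        "initial_workers": 2,
        "batch_size": 8,           # requests aggregated into a single forward pass
        "max_batch_delay": 50,     # milliseconds to wait while filling a batch
    },
)
print(resp.status_code, resp.text)
```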
What are some alternatives?
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
flash-attention - Fast and memory-efficient exact attention
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
serving - A flexible, high-performance serving system for machine learning models
JavaScriptClassifier - [Moved to: https://github.com/JonathanSum/JavaScriptClassifier]
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
BentoML - Build Production-Grade AI Applications
stable-diffusion-webui - Stable Diffusion web UI
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
optimum - 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools