transformer-deploy vs BentoML
| | transformer-deploy | BentoML |
|---|---|---|
| Mentions | 8 | 16 |
| Stars | 1,609 | 6,416 |
| Growth | 1.4% | 3.5% |
| Activity | 6.8 | 9.8 |
| Latest commit | 5 months ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformer-deploy
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like BentoML, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least makes it optional) and has some conveniences: ease of use, easy deployment, easy hosting of multiple models, and some form of dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here; these are just the ones I know of. Thanks in advance.
- [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
We work for Lefebvre Sarrut, a leading European legal publisher. Several of our products include transformer models in latency-sensitive scenarios (search, content recommendation). So far, ONNX Runtime and TensorRT have served us well, and we learned interesting patterns along the way that we shared with the community through an open-source library called transformer-deploy. However, recent changes in our environment made our needs evolve:
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Notebook: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb (ONNX Runtime only)
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
Regarding CPU inference, quantization is very easy and supported by transformer-deploy. However, transformer performance on CPU is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest-generation Intel CPU instances on AWS, like C6 or M6, are quite expensive compared to a cheap GPU like an Nvidia T4. Put differently, unless you are OK with slow inference and want to run on a small instance (for a PoC, for instance), CPU inference of transformers is probably not a good idea.
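For context, the "very easy" CPU quantization mentioned here is typically post-training dynamic quantization of an exported ONNX graph. Below is a minimal sketch using ONNX Runtime's quantization helper; the file paths are placeholders, and this illustrates the general technique rather than transformer-deploy's exact pipeline:

```python
# Dynamic (weight-only) INT8 quantization of an exported ONNX model for CPU inference.
# "model.onnx" and "model-int8.onnx" are placeholder paths, not files from the project above.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",        # FP32 transformer previously exported to ONNX
    model_output="model-int8.onnx",  # quantized model is written here
    weight_type=QuantType.QInt8,     # store weights as signed 8-bit integers
)
```

Even with INT8 weights, as the quote above notes, transformer latency on CPU tends to lag a small GPU like a T4 except at batch size 1 with short sequences.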
- [P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Want to try it? 👉 https://github.com/ELS-RD/transformer-deploy
BentoML
- Who's hiring developer advocates? (December 2023)
- Ask HN: Who is hiring? (November 2022)
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
- PostgresML is 8-40x faster than Python HTTP microservices
- Show HN: Truss – serve any ML model, anywhere, without boilerplate code
In this category I’m a big fan of https://github.com/bentoml/BentoML
What I like about it is their idiomatic developer experience. It reminds me of other Pythonic frameworks like Flask and Django in a good way.
I have no affiliation with them whatsoever, just an admirer.
- [P] Introducing BentoML 1.0 - A faster way to ship your models to production
GitHub page: https://github.com/bentoml/BentoML
- Show HN: Bentoctl – An open-source Terraform deployment tool for ML
Elastic License 2: https://github.com/bentoml/bentoctl/blob/v0.3.1/LICENSE.md, which also applies to their Yatai Kubernetes project, but strangely not (yet?) to the similarly named repo, which is Apache-2: https://github.com/bentoml/BentoML/blob/main/LICENSE
- How to Build a Machine Learning Demo in 2022
Using a general-purpose framework such as FastAPI involves writing a lot of boilerplate code just to get your API endpoint up and running. If deploying a model for a demo is the only thing you are interested in and you do not mind losing some flexibility, you might want to use a specialized serving framework instead. One example is BentoML, which will allow you to get an optimized serving endpoint for your model up and running much faster and with less overhead than a generic web framework. Framework-specific serving solutions such as TensorFlow Serving and TorchServe typically offer optimized performance but can only be used to serve models trained using TensorFlow or PyTorch, respectively.
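To make the boilerplate difference concrete, here is a minimal sketch of a BentoML-style service, assuming the BentoML 1.x Service/runner API and a scikit-learn model already saved to the local model store; the tag `my_classifier:latest`, the service name, and the endpoint are illustrative, not taken from the article quoted above:

```python
# service.py - minimal BentoML 1.x service sketch (illustrative names throughout)
import bentoml
from bentoml.io import JSON

# Load a model previously saved with bentoml.sklearn.save_model(...).
# "my_classifier:latest" is a hypothetical tag, not part of the quoted article.
runner = bentoml.sklearn.get("my_classifier:latest").to_runner()

svc = bentoml.Service("my_classifier_service", runners=[runner])

@svc.api(input=JSON(), output=JSON())
async def predict(payload: dict) -> dict:
    # Delegate inference to the runner, which handles scheduling and batching.
    result = await runner.predict.async_run([payload["features"]])
    # Convert the numpy result to plain Python types for JSON serialization.
    return {"prediction": result.tolist()[0]}
```

Started with `bentoml serve service:svc`, this exposes a POST /predict endpoint; an equivalent FastAPI app would additionally need explicit model loading, request parsing, and server wiring.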
- MLH, Open Source, Mapillary & Me
BentoML - BentoML is a flexible, high-performance framework for serving, managing, and deploying machine learning models.
- Why do so many people think Python is easier to productionize than R?
Also, MLflow is not that optimized, because it doesn't micro-batch like TorchServe/TF Serving/BentoML. https://github.com/bentoml/BentoML/tree/master/benchmark
What are some alternatives?
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
clearml - ClearML - Auto-Magical CI/CD to streamline your ML workflow. Experiment Manager, MLOps and Data-Management
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
FasterTransformer - Transformer related optimization, including BERT, GPT
torch2trt - An easy to use PyTorch to TensorRT converter
kubeflow - Machine Learning Toolkit for Kubernetes
streamlit - Streamlit — A faster way to build and share data apps.
Flask - The Python micro framework for building web applications.
Poetry - Python packaging and dependency management made easy