transformer-deploy vs FasterTransformer
| | transformer-deploy | FasterTransformer |
|---|---|---|
| Mentions | 8 | 7 |
| Stars | 1,614 | 5,436 |
| Growth (stars, monthly) | 0.7% | 3.4% |
| Activity | 6.8 | 4.3 |
| Latest commit | 6 months ago | 22 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformer-deploy
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton inference server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least makes it optional), and has some conveniences: ease of use, easy to deploy, easy to host multiple models, and some form of dynamic batching. Anyway, I am really interested to hear people's experience here as I know there are now quite a few options! Any help is appreciated! Disclaimer - I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
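The dynamic batching asked about above is conceptually simple: queue incoming requests and flush them as one batch either when the batch is full or after a short wait. Below is a minimal, framework-agnostic sketch of that idea; the `MAX_BATCH` / `MAX_WAIT_S` values and the `model` callable are hypothetical and not taken from any of the servers named above.

```python
import queue
import threading
import time
import torch

MAX_BATCH = 8        # hypothetical: flush once this many requests are queued
MAX_WAIT_S = 0.005   # hypothetical: or after ~5 ms, whichever comes first

request_queue: queue.Queue = queue.Queue()

def batching_loop(model):
    """Collect requests into a batch, run one forward pass, fan results back out."""
    while True:
        items = [request_queue.get()]               # block until at least one request arrives
        deadline = time.monotonic() + MAX_WAIT_S
        while len(items) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                items.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        inputs, reply_queues = zip(*items)
        with torch.inference_mode():
            # assumes same-shaped inputs; a real server would pad to a common length
            outputs = model(torch.stack(inputs))    # one batched forward pass
        for reply_q, out in zip(reply_queues, outputs):
            reply_q.put(out)

def infer(x: torch.Tensor) -> torch.Tensor:
    """Called by each request handler; blocks until its result is ready."""
    reply_q: queue.Queue = queue.Queue(maxsize=1)
    request_queue.put((x, reply_q))
    return reply_q.get()

# start the consumer once, e.g.:
# threading.Thread(target=batching_loop, args=(my_model,), daemon=True).start()
```

Triton's `dynamic_batching` option, Ray Serve's request batching and TorchServe's batch settings do essentially this on the server side, plus padding and priority handling.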
- [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
We work for Lefebvre Sarrut, a leading European legal publisher. Several of our products include transformer models in latency sensitive scenarios (search, content recommendation). So far, ONNX Runtime and TensorRT served us well, and we learned interesting patterns along the way that we shared with the community through an open-source library called transformer-deploy. However, recent changes in our environment made our needs evolve:
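For readers who have not written an OpenAI Triton kernel, here is a minimal, self-contained example (a plain elementwise add, not one of the fused attention or layer-norm kernels the post describes) to show the shape of the API:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # each program instance handles one contiguous block of the tensors
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)              # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```

Broadly, the speedups reported in the post come from fusing several memory-bound operations into single kernels of this kind, so intermediate results stay on-chip instead of round-tripping through GPU global memory.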
- Convert Pegasus model to ONNX [Discussion]
here you will find a notebook for T5 on GPU with some tricks to make it fast: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb
- [P] What we learned by benchmarking TorchDynamo (PyTorch team), ONNX Runtime and TensorRT on transformers model (inference)
Check the notebook https://github.com/ELS-RD/transformer-deploy/blob/main/demo/TorchDynamo/benchmark.ipynb for detailed results, but what we will keep in mind:
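As a rough illustration of the TorchDynamo side of such a benchmark (current PyTorch exposes TorchDynamo through `torch.compile`; the model name and timing loop below are placeholders, not the notebook's exact setup):

```python
import time
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"                      # placeholder model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).cuda().eval()
compiled = torch.compile(model)                 # TorchDynamo capture + default backend

inputs = tokenizer(["some benchmark sentence"] * 8,
                   return_tensors="pt", padding=True).to("cuda")

def bench(fn, warmup=10, iters=100):
    with torch.inference_mode():
        for _ in range(warmup):
            fn(**inputs)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            fn(**inputs)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000   # ms per forward pass

print(f"eager:    {bench(model):.2f} ms")
print(f"compiled: {bench(compiled):.2f} ms")
```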
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
notebook: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb (Onnx Runtime only)
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
Regarding CPU inference, quantization is very easy and is supported by transformer-deploy; however, transformer performance on CPU is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest generation of Intel CPU instances on AWS, like C6 or M6, are quite expensive compared to a cheap GPU like an Nvidia T4. Put differently, unless you are OK with slow inference on a small instance (for a PoC, for instance), CPU inference on transformers is probably not a good idea.
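For reference, this kind of "very easy" CPU quantization can be done in a couple of lines with ONNX Runtime's dynamic quantization API, independent of any particular deployment library; the `model.onnx` file below is a hypothetical FP32 export of the transformer.

```python
import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

# rewrite the FP32 ONNX graph with INT8 weights; activations are quantized at runtime
quantize_dynamic(
    model_input="model.onnx",        # hypothetical FP32 export of the transformer
    model_output="model-int8.onnx",
    weight_type=QuantType.QInt8,
)

# run the quantized model on CPU
session = ort.InferenceSession("model-int8.onnx", providers=["CPUExecutionProvider"])
```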
- [P] First ever tutorial to perform *GPU* quantization on 🤗 Hugging Face transformer models -> 2X faster inference
The end to end tutorial: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/quantization_end_to_end.ipynb
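GPU (INT8) quantization for TensorRT typically goes through Nvidia's pytorch-quantization package, which inserts fake-quantization nodes into the PyTorch model before calibration and export. A minimal sketch of its entry point follows; this is not necessarily the exact path the notebook takes, and it omits the calibration/QAT steps entirely.

```python
import torch
from pytorch_quantization import quant_modules
from transformers import AutoModelForSequenceClassification

# monkey-patch torch.nn layers (Linear, Conv, ...) with quantization-aware versions,
# so any model instantiated afterwards carries TensorQuantizer nodes
quant_modules.initialize()

model = (AutoModelForSequenceClassification
         .from_pretrained("roberta-base")       # placeholder model
         .cuda()
         .eval())

# from here, a full workflow calibrates the quantizers on sample data, optionally runs
# quantization-aware training, then exports to ONNX and builds a TensorRT INT8 engine
```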
- [P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Want to try it 👉 https://github.com/ELS-RD/transformer-deploy
FasterTransformer
- Train Your AI Model Once and Deploy on Any Cloud
https://docs.nvidia.com/ai-enterprise/overview/0.1.0/platfor...
RIVA: NVIDIA® Riva, a premium edition of NVIDIA AI Enterprise software, is a GPU-accelerated speech and translation AI SDK.
FasterTransformer: https://github.com/NVIDIA/FasterTransformer an
- Whether the ML computation engineering expertise will be valuable is the question.
There could be some spectrum of this expertise. For instance, https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
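As an example of what that expertise gets packaged into, DeepSpeed can swap a Hugging Face model's layers for its fused CUDA inference kernels with one call. A minimal sketch, with a placeholder model and settings:

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                    # placeholder model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# replace supported modules with DeepSpeed's fused inference kernels
engine = deepspeed.init_inference(
    model,
    mp_size=1,                       # tensor-parallel degree
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)
model = engine.module

inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```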
- Optimized implementation of training/fine-tuning of LLMs [D]
Has anyone tried to optimize the forward and backward passes using custom CUDA code or fused kernels to speed up the training time of current LLMs? I have only seen FasterTransformer (NVIDIA/FasterTransformer) and other similar tools, but they focus only on inference.
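Short of writing CUDA by hand, a common first step toward what the question asks is letting the PyTorch JIT (or torch.compile) fuse elementwise chains in the forward and backward passes. A small illustration; the bias-GELU pattern here is just an example, not code taken from FasterTransformer:

```python
import torch

@torch.jit.script
def fused_bias_gelu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # the JIT fuser can emit a single kernel for this elementwise chain,
    # avoiding an extra memory round-trip for the intermediate tensor
    return torch.nn.functional.gelu(x + bias)

x = torch.randn(8, 1024, 4096, device="cuda", requires_grad=True)
bias = torch.randn(4096, device="cuda", requires_grad=True)

out = fused_bias_gelu(x, bias)
out.sum().backward()   # the backward pass is differentiated through the scripted graph
```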
- Exploring Ghostwriter, a GitHub Copilot alternative
Replit built Ghostwriter on the open-source scene: it is based on Salesforce's CodeGen, uses Nvidia's FasterTransformer and Triton server for highly optimized decoders, and distills the CodeGen model from two billion parameters down to a faster one-billion-parameter model.
- Why is self-attention not as deployment friendly?
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Nvidia FasterTransformer is a mix of Pytorch and dedicated CUDA/C++ code. The performance boost is huge on T5: they report a 10X speedup, like TensorRT. However, the speedup is computed on a translation task where sequences are 25 tokens long on average. In our experience, improvements on very short sequences tend to shrink by large margins on longer ones. Still, we plan to dig deeper into this project as it implements very interesting ideas.
- [P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
On the other side of the spectrum, there are Nvidia demos (here or there) showing how to build a full Transformer graph manually (operator by operator) in TensorRT to get the best performance from their hardware. It's out of reach for many NLP practitioners and it's time consuming to debug/maintain/adapt to a slightly different architecture (I tried). Plus, there is a secret: the very optimized model only works for specific sequence lengths and batch sizes. The truth is that, so far (and it will improve soon), it's mainly for MLPerf benchmarks (the ones used to compare DL hardware), marketing content, and very specialized engineers.
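For contrast with the manual operator-by-operator approach described above, the path that libraries such as transformer-deploy automate is building the TensorRT engine from an ONNX export. A rough sketch of that Python API; the file names, input names and shape ranges are placeholders:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:             # placeholder ONNX export of the transformer
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)           # mixed precision

# dynamic shapes: the engine is optimized for a range of batch sizes / sequence lengths
profile = builder.create_optimization_profile()
profile.set_shape("input_ids", min=(1, 16), opt=(8, 128), max=(32, 128))
profile.set_shape("attention_mask", min=(1, 16), opt=(8, 128), max=(32, 128))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```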
What are some alternatives?
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
torch2trt - An easy to use PyTorch to TensorRT converter
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
parallelformers - Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit
mmrazor - OpenMMLab Model Compression Toolbox and Benchmark.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
sparsednn - Fast sparse deep learning on CPUs
optimum - 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools