Transformer-deploy Alternatives
Similar projects and alternatives to transformer-deploy
-
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
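For context, a minimal sketch of how a diffusers pipeline is typically used (the checkpoint name and prompt are illustrative placeholders, unrelated to transformer-deploy):

```python
# Minimal diffusers usage sketch: load a pretrained pipeline and generate one image.
# The model ID and prompt are illustrative placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # diffusion models are impractically slow on CPU

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```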
-
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
-
TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
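As an illustration of what an engine build looks like, here is a rough sketch using the TensorRT 8.x Python API to convert an ONNX file into a serialized FP16 engine; the file paths are assumptions, and real transformer models usually also need optimization profiles for dynamic shapes:

```python
# Sketch: parse an ONNX model and build a serialized FP16 TensorRT engine.
# Assumes the TensorRT 8.x Python API; "model.onnx" / "model.plan" are placeholder paths.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # mixed precision for faster inference

serialized_engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized_engine)
```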
-
BentoML
The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
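A rough sketch of what a BentoML service can look like, assuming the BentoML 1.x Service/runner API and a transformers pipeline already saved to the local model store; the model tag and service name are hypothetical:

```python
# Sketch of a BentoML 1.x service wrapping a saved transformers pipeline.
# "sentiment" is a hypothetical model tag, saved earlier with
# bentoml.transformers.save_model("sentiment", pipeline).
import bentoml
from bentoml.io import JSON

runner = bentoml.transformers.get("sentiment:latest").to_runner()
svc = bentoml.Service("sentiment_service", runners=[runner])

@svc.api(input=JSON(), output=JSON())
async def classify(payload: dict) -> dict:
    # Runners execute the model in separate workers, which enables adaptive batching.
    result = await runner.async_run(payload["text"])
    return {"prediction": result}
```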
-
optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools
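A minimal sketch of the optimum + ONNX Runtime path for a classification model; the checkpoint name is chosen purely for illustration:

```python
# Sketch: export a Hugging Face model to ONNX and run it with ONNX Runtime
# through optimum. The checkpoint name is only an example.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("transformer-deploy makes inference fast", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```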
-
kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
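The "single line of code" refers to kernl's optimize_model entry point; a rough sketch, with the checkpoint and inputs as placeholders:

```python
# Sketch of kernl's one-line optimization, assuming the documented
# kernl.model_optimization.optimize_model entry point.
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # swaps key ops for fused OpenAI Triton kernels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("hello world", return_tensors="pt").to("cuda")

with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)
```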
transformer-deploy discussion
transformer-deploy reviews and mentions
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like BentoML, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least makes it optional), and has some conveniences: ease of use, easy to deploy, easy to host multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here as I know there are now quite a few options! Any help is appreciated! Disclaimer - I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
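To make the Triton option mentioned above more concrete, here is a rough client-side sketch using the tritonclient HTTP API; the model name and tensor names are hypothetical and must match however the model is actually deployed:

```python
# Sketch: query a model hosted on Triton Inference Server over HTTP.
# "my_transformer", "input_ids", "attention_mask" and "logits" are hypothetical
# names that must match the deployed model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

input_ids = np.zeros((1, 16), dtype=np.int64)
attention_mask = np.ones((1, 16), dtype=np.int64)

inputs = [
    httpclient.InferInput("input_ids", list(input_ids.shape), "INT64"),
    httpclient.InferInput("attention_mask", list(attention_mask.shape), "INT64"),
]
inputs[0].set_data_from_numpy(input_ids)
inputs[1].set_data_from_numpy(attention_mask)

outputs = [httpclient.InferRequestedOutput("logits")]
result = client.infer(model_name="my_transformer", inputs=inputs, outputs=outputs)
print(result.as_numpy("logits").shape)
```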
-
[P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
We work for Lefebvre Sarrut, a leading European legal publisher. Several of our products include transformer models in latency-sensitive scenarios (search, content recommendation). So far, ONNX Runtime and TensorRT have served us well, and we learned interesting patterns along the way that we shared with the community through an open-source library called transformer-deploy. However, recent changes in our environment made our needs evolve:
-
Convert Pegasus model to ONNX [Discussion]
here you will find a notebook for T5 on GPU with some tricks to make it fast: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb
-
[P] What we learned by benchmarking TorchDynamo (PyTorch team), ONNX Runtime and TensorRT on transformers model (inference)
Check the notebook https://github.com/ELS-RD/transformer-deploy/blob/main/demo/TorchDynamo/benchmark.ipynb for detailed results, but what we will keep in mind:
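The benchmark targeted the early TorchDynamo API; as a minimal sketch of the equivalent path today via torch.compile (PyTorch 2.x), with a placeholder checkpoint:

```python
# Sketch: compile a Hugging Face model with TorchDynamo via torch.compile
# (PyTorch 2.x). The checkpoint is a placeholder; the benchmarked notebook
# used the earlier torchdynamo.optimize API.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased").eval().cuda()
compiled_model = torch.compile(model)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("benchmark me", return_tensors="pt").to("cuda")

with torch.inference_mode():
    logits = compiled_model(**inputs).logits  # first call triggers compilation
```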
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
notebook: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb (Onnx Runtime only)
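The linked notebook implements its own export and caching tricks; as a much simpler ONNX Runtime baseline (not the notebook's method), a hedged sketch using optimum's seq2seq wrapper:

```python
# Sketch: run T5 generation on ONNX Runtime via optimum (a plain baseline,
# not the notebook's optimized approach). "t5-small" is used for illustration.
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model = ORTModelForSeq2SeqLM.from_pretrained("t5-small", export=True)
tokenizer = AutoTokenizer.from_pretrained("t5-small")

inputs = tokenizer("translate English to French: The cat is on the table.", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```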
-
[P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
Regarding CPU inference: quantization is very easy and is supported by transformer-deploy, but transformer performance on CPU is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest-generation Intel CPU instances on AWS, like C6 or M6, are quite expensive compared to a cheap GPU such as an Nvidia T4. Put another way, unless you are OK with slow inference on a small instance (for a PoC, for instance), CPU inference of transformers is probably not a good idea.
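For reference, CPU int8 quantization itself really is close to a one-liner; the sketch below uses plain PyTorch dynamic quantization as a generic illustration (not transformer-deploy's pipeline). The point above is that even quantized, transformers on CPU rarely beat a cheap GPU outside small workloads:

```python
# Sketch: PyTorch dynamic int8 quantization of a transformer's linear layers
# (a generic illustration of CPU quantization, not transformer-deploy's pipeline).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
).eval()

quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
# Linear layers now use int8 weights; gains are largest for batch size 1 and
# short sequences, which matches the corner cases mentioned above.
```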
-
[P] First-ever tutorial to perform *GPU* quantization on 🤗 Hugging Face transformer models -> 2X faster inference
The end to end tutorial: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/quantization_end_to_end.ipynb
-
[P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Want to try it 👉 https://github.com/ELS-RD/transformer-deploy
Stats
ELS-RD/transformer-deploy is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of transformer-deploy is Python.
Popular Comparisons
- transformer-deploy VS FasterTransformer
- transformer-deploy VS TensorRT
- transformer-deploy VS fastT5
- transformer-deploy VS torch2trt
- transformer-deploy VS optimum
- transformer-deploy VS mmrazor
- transformer-deploy VS OpenSeeFace
- transformer-deploy VS parallelformers
- transformer-deploy VS sparsednn