vllm VS TensorRT

Compare vllm vs TensorRT and see what their differences are.

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs (by vllm-project)

TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. (by NVIDIA)
                   vllm                TensorRT
Mentions           31                  22
Stars              19,344              9,184
Growth             12.6%               2.6%
Activity           9.9                 4.8
Latest commit      1 day ago           7 days ago
Language           Python              C++
License            Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

vllm

Posts with mentions or reviews of vllm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-09.
  • AI leaderboards are no longer useful. It's time to switch to Pareto curves
    1 project | news.ycombinator.com | 30 Apr 2024
    I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.

    What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.

    On the other hand, it's not just GPT: there are currently floating-point difficulties in vllm that significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32. So it's possible that GPT-3.5 uses an especially low-precision float format to save on compute costs, introducing nondeterminism as a side effect.

    Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.

    [1] Or the time, or the motivation :) But this stuff is expensive.
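
    A minimal sketch of the floating-point effect discussed above, using plain NumPy rather than the actual vLLM kernels (the values are made up): in float16, the reduction order changes the result slightly, while upcasting the accumulation to float32 (the fix suggested in the linked issue) makes it far more stable. When two tokens' logits are nearly tied, that rounding noise is enough to flip a greedy (temperature=0) decision from run to run.

      # Hypothetical illustration only: order-dependent rounding in fp16 reductions,
      # and why upcasting the accumulation to float32 reduces run-to-run variation.
      import numpy as np

      rng = np.random.default_rng(0)
      contribs = rng.normal(scale=1e-2, size=8192).astype(np.float16)

      def accumulate_fp16(values):
          total = np.float16(0.0)
          for v in values:              # each partial sum is rounded back to fp16
              total = np.float16(total + v)
          return total

      s_fwd = accumulate_fp16(contribs)           # one reduction order
      s_rev = accumulate_fp16(contribs[::-1])     # a different reduction order
      s_f32 = contribs.astype(np.float32).sum()   # upcast accumulation

      print("fp16 forward:", s_fwd)
      print("fp16 reverse:", s_rev)   # typically differs in the last bits
      print("fp32 upcast :", s_f32)
      # If two logits differ by less than this rounding noise, greedy decoding
      # can pick either token depending on kernel and scheduling order.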

  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness)
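
    For reference, a hedged sketch of what that looks like in practice; the model name, GPU count, and evaluation task below are placeholders, so check the vLLM and lm-evaluation-harness documentation for current flags:

      # Sketch: offline batch inference with vLLM across two GPUs (tensor parallelism).
      from vllm import LLM, SamplingParams

      llm = LLM(model="mistralai/Mixtral-8x22B-v0.1", tensor_parallel_size=2)
      outputs = llm.generate(["Explain KV caching in one sentence."],
                             SamplingParams(temperature=0.0, max_tokens=128))
      print(outputs[0].outputs[0].text)

      # Benchmarking can then be driven by lm-evaluation-harness, roughly:
      #   lm_eval --model vllm \
      #     --model_args pretrained=mistralai/Mixtral-8x22B-v0.1,tensor_parallel_size=2 \
      #     --tasks hellaswag
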
  • FLaNK AI for 11 March 2024
    46 projects | dev.to | 11 Mar 2024
  • Show HN: We got fine-tuning Mistral-7B to not suck
    4 projects | news.ycombinator.com | 7 Feb 2024
    Great question! Scheduling workloads onto GPUs in a way that utilises VRAM efficiently was quite the challenge.

    What we found was that the IO latency of loading model weights into VRAM will kill responsiveness if you don't "re-use" sessions (i.e. the model weights remain loaded and you run multiple inference sessions over the same loaded weights).

    Obviously projects like https://github.com/vllm-project/vllm exist but we needed to build out a scheduler that can run a fleet of GPUs for a matrix of text/image vs inference/finetune sessions.

    disclaimer: I work on Helix
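
    The session re-use point above can be illustrated with a minimal, framework-agnostic sketch (the model name, load time, and cache size are all hypothetical): pay the weight-loading cost once per process, then serve many requests against the cached model instead of reloading per request.

      # Hypothetical sketch of "re-using sessions": keep loaded weights in a
      # process-level cache so the expensive disk -> VRAM transfer happens once.
      import time
      from functools import lru_cache

      def load_weights(model_name: str):
          time.sleep(2.0)            # stand-in for multi-GB weight loading / VRAM upload
          return {"name": model_name, "weights": object()}

      @lru_cache(maxsize=4)          # evict least-recently-used models when memory is scarce
      def get_model(model_name: str):
          return load_weights(model_name)

      def run_inference(model_name: str, prompt: str) -> str:
          model = get_model(model_name)      # cache hit after the first call
          return f"[{model['name']}] response to: {prompt}"

      for prompt in ["hello", "world", "again"]:
          start = time.time()
          run_inference("mistral-7b-finetune", prompt)
          print(f"{time.time() - start:.2f}s")   # first call ~2s, later calls ~0s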

  • Mistral CEO confirms 'leak' of new open source AI model nearing GPT4 performance
    5 projects | news.ycombinator.com | 31 Jan 2024
    FYI, vLLM also just added experimental multi-LoRA support: https://github.com/vllm-project/vllm/releases/tag/v0.3.0

    Also check out the new prefix caching, I see huge potential for batch processing purposes there!
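
    A hedged sketch of the multi-LoRA feature referenced above; the adapter names and paths are placeholders, and the interface shown is the one described in the vLLM v0.3.0 release notes, so consult the docs before relying on it:

      # Sketch: serving one base model with per-request LoRA adapters in vLLM.
      from vllm import LLM, SamplingParams
      from vllm.lora.request import LoRARequest

      llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
      params = SamplingParams(temperature=0.0, max_tokens=64)

      # Different requests can target different adapters over the same base weights.
      out_sql = llm.generate(["Translate to SQL: count users by country"], params,
                             lora_request=LoRARequest("sql-adapter", 1, "/adapters/sql"))
      out_chat = llm.generate(["Hi there!"], params,
                              lora_request=LoRARequest("chat-adapter", 2, "/adapters/chat"))
      print(out_sql[0].outputs[0].text)
      print(out_chat[0].outputs[0].text)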

  • VLLM Sacrifices Accuracy for Speed
    1 project | news.ycombinator.com | 23 Jan 2024
  • Easy, fast, and cheap LLM serving for everyone
    1 project | news.ycombinator.com | 17 Dec 2023
  • vllm
    1 project | news.ycombinator.com | 15 Dec 2023
  • Mixtral Expert Parallelism
    1 project | news.ycombinator.com | 15 Dec 2023
  • Mixtral 8x7B Support
    1 project | news.ycombinator.com | 11 Dec 2023

TensorRT

Posts with mentions or reviews of TensorRT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-26.
  • AMD MI300X 30% higher performance than Nvidia H100, even with optimized stack
    1 project | news.ycombinator.com | 17 Dec 2023
    > It's not rocket science to implement matrix multiplication in any GPU.

    You're right, it's harder. Saying this as someone who's done more work on the former than the latter. (I have, with a team, built a rocket engine. And not your school or backyard project size, but nozzle bigger than your face kind. I've also written CUDA kernels and boy is there a big learning curve to the latter that you gotta fundamentally rethink how you view a problem. It's unquestionable why CUDA devs are paid so much. Really it's only questionable why they aren't paid more)

    I know it is easy to think this problem is easy; it really looks that way. But there's an incredible amount of optimization that goes into all of this, and that's what's really hard. You aren't going to get away with just N nested for-loops for a rank-N tensor. You've got to chop the data up, be intelligent about it, manage memory and how you load it, handle many data types, take into consideration different results for different FMA operations, and a whole lot more. There are a whole lot of non-obvious things that go into high optimization (maybe obvious __after__ the fact, but that's not truthfully "obvious"). The thing is, the space is so well researched and implemented that you can't get away with naive implementations; you have to be on the bleeding edge.

    Then you have to do that and make it reasonably usable for the programmer too, abstracting all of that away. CUDA also has a huge head start, and momentum is a force to be reckoned with (pun intended).

    Look at TensorRT[0]. The software isn't even complete and it still isn't going to cover all neural networks on all GPUs. I've had stuff work on a V100 and H100 but not an A100, then later get fixed. They even have the "Apple Advantage" in that they have control of the hardware. I'm not certain AMD will have the same advantage. We talk a lot about the difficulties of being first mover, but I think we can also recognize that momentum is an advantage of being first mover. And it isn't one to scoff at.

    [0] https://github.com/NVIDIA/TensorRT
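
    A small NumPy sketch of the "chop the data up" idea from the comment above: a blocked (tiled) matmul that works on sub-matrices small enough to stay in fast memory. Real CUDA kernels layer shared-memory staging, warp scheduling, mixed precision, and FMA ordering on top of this, which is where the genuinely hard work lives; the block size and shapes here are arbitrary.

      # Toy blocked matrix multiply: process BLOCK x BLOCK tiles so each tile can
      # stay in fast memory; the result matches the library matmul up to rounding.
      import numpy as np

      def blocked_matmul(a, b, block=64):
          n, k = a.shape
          k2, m = b.shape
          assert k == k2
          out = np.zeros((n, m), dtype=a.dtype)
          for i in range(0, n, block):
              for j in range(0, m, block):
                  for p in range(0, k, block):
                      out[i:i+block, j:j+block] += (
                          a[i:i+block, p:p+block] @ b[p:p+block, j:j+block]
                      )
          return out

      a = np.random.rand(256, 256).astype(np.float32)
      b = np.random.rand(256, 256).astype(np.float32)
      print(np.allclose(blocked_matmul(a, b), a @ b, atol=1e-3))   # True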

  • Getting SDXL-turbo running with tensorRT
    1 project | /r/StableDiffusion | 6 Dec 2023
    (python demo_txt2img.py "a beautiful photograph of Mt. Fuji during cherry blossom"). https://github.com/NVIDIA/TensorRT/tree/release/8.6/demo/Diffusion
  • Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
    14 projects | news.ycombinator.com | 26 Sep 2023
    - https://github.com/NVIDIA/TensorRT

    TVM and other compiler-based approaches seem to perform really well and make supporting different backends easy. A good friend who's been in this space for a while told me llama.cpp is sort of a "hand-crafted" version of what these compilers could output, which I think speaks to the craftsmanship Georgi and the ggml team have put into llama.cpp, but also to the opportunity to "compile" versions of llama.cpp for other model architectures or platforms.

  • Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
    3 projects | news.ycombinator.com | 8 Sep 2023
    https://github.com/NVIDIA/TensorRT/issues/982

    Maybe? Looks like TensorRT does work, but I couldn't find much.

  • Train Your AI Model Once and Deploy on Any Cloud
    3 projects | news.ycombinator.com | 8 Jul 2023
    highly optimized transformer-based encoder and decoder component, supported on PyTorch, TensorFlow and Triton

    TensorRT, a custom ML framework/inference runtime from NVIDIA (https://developer.nvidia.com/tensorrt), but you have to port your models

  • A1111 just added support for TensorRT for webui as an extension!
    5 projects | /r/StableDiffusion | 27 May 2023
  • WIP - TensorRT accelerated stable diffusion img2img from mobile camera over webrtc + whisper speech to text. Interdimensional cable is here! Code: https://github.com/venetanji/videosd
    3 projects | /r/StableDiffusion | 21 Feb 2023
    It uses the nvidia demo code from: https://github.com/NVIDIA/TensorRT/tree/main/demo/Diffusion
  • [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
    7 projects | /r/MachineLearning | 8 Feb 2023
    The traditional way to deploy a model is to export it to ONNX, then to the TensorRT plan format. Each step requires its own tooling and its own mental model, and may raise issues. The most annoying thing is that you need Microsoft or Nvidia support to get the best performance, and sometimes model support takes time. For instance, T5, a model released in 2019, is not yet correctly supported on TensorRT; in particular, the K/V cache is missing (soon it will be, according to the TensorRT maintainers, but I wrote the very same thing almost 1 year ago and then 4 months ago, so… I don't know).
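
    The export chain being described (PyTorch model → ONNX → TensorRT plan) roughly looks like the sketch below, using a toy model; real transformer exports need dynamic axes, opset choices, and often plugins, which is exactly where the tooling friction comes from:

      # Sketch of the two-step deployment path: export to ONNX, then build a
      # TensorRT engine ("plan") from the ONNX file.
      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
      dummy = torch.randn(1, 128)

      torch.onnx.export(
          model, dummy, "model.onnx",
          input_names=["input"], output_names=["logits"],
          dynamic_axes={"input": {0: "batch"}},   # allow variable batch size
      )

      # Step 2 is typically done with NVIDIA's trtexec tool on a machine with TensorRT:
      #   trtexec --onnx=model.onnx --saveEngine=model.plan --fp16
      # Each step has its own tooling and failure modes, which is the friction the
      # comment above points at (unsupported ops, missing K/V-cache support, etc.).
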
  • Speeding up T5
    2 projects | /r/LanguageTechnology | 22 Jan 2023
    I've tried to speed it up with TensorRT and followed this example: https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/notebooks/t5.ipynb - it does give a considerable speedup for batch-size=1, but it does not work with bigger batch sizes, which makes it useless, as I can simply increase the batch size of the HuggingFace model.
  • demoDiffusion on TensorRT - supports 3090, 4090, and A100
    1 project | /r/StableDiffusion | 10 Dec 2022

What are some alternatives?

When comparing vllm and TensorRT you can also consider the following projects:

CTranslate2 - Fast inference engine for Transformer models

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

FasterTransformer - Transformer related optimization, including BERT, GPT

Llama-2-Onnx

onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX

tritony - Tiny configuration for Triton Inference Server

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

faster-whisper - Faster Whisper transcription with CTranslate2

stable-diffusion-webui - Stable Diffusion web UI

text-generation-inference - Large Language Model Text Generation Inference

flash-attention - Fast and memory-efficient exact attention