tensorrtllm_backend

The Triton TensorRT-LLM Backend (by triton-inference-server)

tensorrtllm_backend reviews and mentions

Posts with mentions or reviews of tensorrtllm_backend. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2024-02-08.
  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    Nvidia Triton Inference Server with the TensorRT-LLM backend:

    https://github.com/triton-inference-server/tensorrtllm_backe...

    It’s used by Mistral, AWS, Cloudflare, and countless others.

    vLLM, HF TGI, Ray Serve, etc. are certainly viable, but Triton has many truly unique and very powerful features (not to mention performance); a minimal sketch of querying it through the TensorRT-LLM backend appears after these mentions.

    100k DAU doesn't mean much on its own; you'd need a better understanding of the application: input tokens, generated output tokens, request rates, and peaks, not to mention the required time to first token and tokens per second (a back-of-the-envelope sizing sketch follows after these mentions).

    Anyway, the point is Triton is just about the only thing out there for use in this general range and up.

  • MK1 Flywheel Unlocks the Full Potential of AMD Instinct for LLM Inference
    3 projects | news.ycombinator.com | 8 Jan 2024
    I support any progress to erode the Nvidia monopoly.

    That said, from what I'm seeing here, the free and open-source (less other aspects of the CUDA stack, of course) TensorRT-LLM [0] almost certainly bests this implementation on the Nvidia hardware they reference for comparison.

    I don't have an A6000, but as an example, with the tensorrt_llm backend for Nvidia Triton Inference Server [1] (also free and open source) I get roughly 30 req/s with Mistral 7B on my RTX 4090, with significantly lower latency. Comparison benchmarks are tough, especially when published benchmarks like these are fairly scant on the real details (a rough way to measure this yourself is sketched after these mentions).

    TensorRT-LLM has only been public for a few months, and if you peruse the docs, PRs, etc. you'll see they have many more optimizations in the works.

    In typical Nvidia fashion, TensorRT-LLM runs on any Nvidia card (from laptop to datacenter) going back to Turing (five-year-old cards), assuming you have the VRAM (a rough VRAM estimate is sketched after these mentions).

    You can download and run this today, free and "open source" for these implementations at least. I'm extremely skeptical of the claim that "MK1 Flywheel has the Best Throughput and Latency for LLM Inference on NVIDIA" [2]. You'll note they compare to vLLM, which is an excellent and incredible project, but if you look at vLLM vs Triton with TensorRT-LLM, the performance improvements are dramatic.

    Of course it's the latest and greatest ($$$$$$ and unobtainium), but one look at H100/H200 performance [3] shows what happens when the vendor has a robust software ecosystem to help sell their hardware. Pay the Nvidia tax on the front end for the hardware, get it back as a dividend on the software.

    I feel like MK1 must be aware of TensorRT-LLM but of course those comparison benchmarks won't help sell their startup.

    [0] - https://github.com/NVIDIA/TensorRT-LLM

    [1] - https://github.com/triton-inference-server/tensorrtllm_backe...

    [2] - https://mkone.ai/blog/mk1-flywheel-race-tuned-and-track-read...

    [3] - https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source...
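
A note on the Triton + TensorRT-LLM setup mentioned above: once a TensorRT-LLM model repository is deployed, Triton is typically queried over its HTTP generate endpoint. The snippet below is a minimal sketch under assumptions taken from the backend's example configuration: host localhost:8000, a model named "ensemble", and the text_input / max_tokens / text_output field names; a real deployment may differ.

    # Minimal sketch: query a Triton server running the TensorRT-LLM backend
    # via the HTTP generate endpoint. Host, port, model name ("ensemble"),
    # and field names follow the backend's example configs and are assumptions
    # about a particular deployment.
    import requests

    TRITON_URL = "http://localhost:8000"   # assumed default Triton HTTP port
    MODEL = "ensemble"                     # assumed model name from the examples

    payload = {
        "text_input": "What is Triton Inference Server?",
        "max_tokens": 128,
        "temperature": 0.7,
    }

    # Triton's HTTP generate endpoint: POST /v2/models/<model>/generate
    resp = requests.post(f"{TRITON_URL}/v2/models/{MODEL}/generate",
                         json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["text_output"])

Note that this is Triton's native generate API rather than an OpenAI-compatible endpoint, which is part of why API compatibility comes up in the linked thread at all.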
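
On the point that "100k DAU doesn't mean much": the back-of-the-envelope sketch below shows why request rates and token counts drive capacity rather than the user count. Every input (requests per user, output length, peak factor, per-GPU token rate) is an illustrative assumption, not a measurement.

    # Back-of-the-envelope capacity sketch. Every input here is an assumption
    # chosen only to illustrate the arithmetic, not a measurement.
    daily_active_users    = 100_000
    requests_per_user     = 10       # assumed requests per user per day
    output_tokens_per_req = 300      # assumed generated tokens per request
    peak_factor           = 5        # assumed peak-to-average traffic ratio
    gpu_tokens_per_sec    = 2_000    # assumed sustained per-GPU generation rate

    avg_req_per_sec     = daily_active_users * requests_per_user / 86_400
    peak_req_per_sec    = avg_req_per_sec * peak_factor
    peak_tokens_per_sec = peak_req_per_sec * output_tokens_per_req
    gpus_needed         = peak_tokens_per_sec / gpu_tokens_per_sec

    print(f"average load : {avg_req_per_sec:.1f} req/s")
    print(f"peak load    : {peak_req_per_sec:.1f} req/s ~ {peak_tokens_per_sec:,.0f} tok/s")
    print(f"GPUs needed  : ~{gpus_needed:.0f} (at {gpu_tokens_per_sec} tok/s per GPU)")

Shift any single assumption by 2-3x and the GPU count shifts by the same factor, which is exactly why the DAU number alone says little.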
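
The "roughly 30 req/s" figure above is a local measurement, and reproducing anything comparable means driving the server with concurrent requests rather than one at a time. The sketch below is a rough harness against the same assumed generate endpoint; it is not the tooling the commenter used, and Triton also ships dedicated benchmarking clients.

    # Rough concurrency benchmark against the (assumed) generate endpoint above.
    # A sketch for illustration only; not the benchmark the commenter ran.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://localhost:8000/v2/models/ensemble/generate"  # assumed deployment
    PAYLOAD = {"text_input": "Hello", "max_tokens": 64}
    TOTAL_REQUESTS = 200
    CONCURRENCY = 32

    def one_request(_):
        r = requests.post(URL, json=PAYLOAD, timeout=120)
        r.raise_for_status()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        list(pool.map(one_request, range(TOTAL_REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"{TOTAL_REQUESTS} requests in {elapsed:.1f}s "
          f"-> {TOTAL_REQUESTS / elapsed:.1f} req/s")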
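
"Assuming you have the VRAM" can be roughed out as weights plus KV cache. The numbers below describe a Mistral-7B-like model (32 layers, 8 KV heads, head dimension 128) with assumed batch and sequence sizes; engine overhead and activation memory are ignored, so treat the result as a floor rather than a precise figure.

    # Rough VRAM estimate: weights + KV cache. Architecture numbers are for a
    # Mistral-7B-like model and, together with the batch/sequence sizes, are
    # illustrative assumptions; runtime overhead and activations are ignored.
    params_billion   = 7.2
    bytes_per_weight = 2      # FP16; ~1 for INT8, ~0.5 for INT4 quantization

    n_layers   = 32
    n_kv_heads = 8
    head_dim   = 128
    kv_bytes   = 2            # FP16 KV cache
    batch_size = 8
    seq_len    = 4096         # tokens of context + generation kept in cache

    weights_gb = params_billion * 1e9 * bytes_per_weight / 1e9
    # K and V per token per layer: 2 * n_kv_heads * head_dim values
    kv_cache_gb = (2 * n_layers * n_kv_heads * head_dim * kv_bytes
                   * batch_size * seq_len) / 1e9

    print(f"weights : ~{weights_gb:.1f} GB")
    print(f"KV cache: ~{kv_cache_gb:.1f} GB")
    print(f"total   : ~{weights_gb + kv_cache_gb:.1f} GB")

At FP16 this lands under the 24 GB of an RTX 4090, which is consistent with the commenter running Mistral 7B there; quantized weights lower the floor further.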

Stats

Basic tensorrtllm_backend repo stats
Mentions: 2
Stars: 500
Activity: 7.9
Last Commit: 6 days ago

triton-inference-server/tensorrtllm_backend is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of tensorrtllm_backend is Python.

