tensorrtllm_backend VS model_navigator

Compare tensorrtllm_backend vs model_navigator and see how they differ.

tensorrtllm_backend

The Triton TensorRT-LLM Backend (by triton-inference-server)

model_navigator

Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. (by triton-inference-server)
                    tensorrtllm_backend    model_navigator
Mentions            3                      1
Stars               530                    161
Growth              9.8%                   3.1%
Activity            7.9                    8.9
Latest commit       5 days ago             25 days ago
Language            Python                 Python
License             Apache License 2.0     Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

tensorrtllm_backend

Posts with mentions or reviews of tensorrtllm_backend. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-08.
  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    Nvidia Triton Inference Server with the TensorRT-LLM backend:

    https://github.com/triton-inference-server/tensorrtllm_backe...

    It’s used by Mistral, AWS, Cloudflare, and countless others.

    vLLM, HF TGI, Ray Serve, etc. are certainly viable, but Triton has many truly unique and very powerful features (not to mention performance).

    100k DAU doesn’t mean much on its own; you’d need a better understanding of the application: input tokens, generated output tokens, request rates, peaks, and so on, not to mention required time to first token, tokens per second, etc.

    Anyway, the point is Triton is just about the only thing out there for use in this general range and up.
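
    As a rough illustration of what a request to the TensorRT-LLM backend looks like, here is a minimal sketch using Triton's Python gRPC client. The model name ("ensemble") and tensor names ("text_input", "max_tokens", "text_output") follow the example model repository shipped with tensorrtllm_backend and may differ in a given deployment; the endpoint URL is an assumption.

        import numpy as np
        import tritonclient.grpc as grpcclient  # pip install tritonclient[grpc]

        # Triton's gRPC endpoint; localhost:8001 is the default port (assumed here).
        client = grpcclient.InferenceServerClient(url="localhost:8001")

        # String/int32 inputs expected by the example TensorRT-LLM ensemble.
        text = np.array([["What is the Triton Inference Server?"]], dtype=object)
        max_tokens = np.array([[128]], dtype=np.int32)

        inputs = [
            grpcclient.InferInput("text_input", list(text.shape), "BYTES"),
            grpcclient.InferInput("max_tokens", list(max_tokens.shape), "INT32"),
        ]
        inputs[0].set_data_from_numpy(text)
        inputs[1].set_data_from_numpy(max_tokens)

        outputs = [grpcclient.InferRequestedOutput("text_output")]

        # Synchronous request against the ensemble; print the generated text tensor.
        result = client.infer(model_name="ensemble", inputs=inputs, outputs=outputs)
        print(result.as_numpy("text_output"))

    The HTTP client (tritonclient.http) and gRPC streaming via the backend's decoupled mode follow the same general pattern.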

  • MK1 Flywheel Unlocks the Full Potential of AMD Instinct for LLM Inference
    3 projects | news.ycombinator.com | 8 Jan 2024
    I support any progress to erode the Nvidia monopoly.

    That said, from what I'm seeing here, the free and open-source TensorRT-LLM[0] (less other aspects of the CUDA stack, of course) almost certainly bests this implementation on the Nvidia hardware they reference for comparison.

    I don't have an A6000, but as an example, with the tensorrt_llm backend for Nvidia Triton Inference Server (also free and open source) I get roughly 30 req/s with Mistral 7B on my RTX 4090, with significantly lower latency. Comparison benchmarks are tough, especially when published benchmarks like these are fairly scant on real details.

    TensorRT-LLM has only been public for a few months, and if you peruse the docs, PRs, etc., you'll see they have many more optimizations in the works.

    In typical Nvidia fashion, TensorRT-LLM runs on any Nvidia card (from laptop to datacenter) going back to Turing (five-year-old cards), assuming you have the VRAM.

    You can download and run this today, free and "open source" for these implementations at least. I'm extremely skeptical of the claim "MK1 Flywheel has the Best Throughput and Latency for LLM Inference on NVIDIA". You'll note they compare to vLLM, which is an excellent and incredible project, but if you look at vLLM vs Triton w/ TensorRT-LLM, the performance improvements are dramatic.

    Of course, it's the latest and greatest ($$$$$$ and unobtanium), but one look at H100/H200 performance[3] and you can see what happens when the vendor has a robust software ecosystem to help sell their hardware. Pay the Nvidia tax on the front end for the hardware, get it back as a dividend on the software.

    I feel like MK1 must be aware of TensorRT-LLM but of course those comparison benchmarks won't help sell their startup.

    [0] - https://github.com/NVIDIA/TensorRT-LLM

    [1] - https://github.com/triton-inference-server/tensorrtllm_backe...

    [2] - https://mkone.ai/blog/mk1-flywheel-race-tuned-and-track-read...

    [3] - https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source...

model_navigator

Posts with mentions or reviews of model_navigator. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-08.
  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    - While keeping power utilization below X

    They will take the exported model and dynamically deploy the package to a Triton instance running on your actual inference-serving hardware, then generate requests against your SLAs to arrive at the optimal model configuration (a minimal sketch of the Model Navigator API follows below). You even get exported metrics and pretty reports for every configuration used/attempted. You can take the same exported package, change the SLA params, and it will automatically re-generate the configuration for you.

    - Performance on a completely different level. TensorRT-LLM in particular is extremely new and very early, but at high scale you can already start to see > 10k RPS on a single node.

    - gRPC support. Especially when using pre/post processing, ensembles, etc., you can configure clients programmatically to use the individual models or the ensemble chain (as one example). This opens up a very wide range of powerful architecture options that simply aren't available anywhere else. gRPC could probably be thought of as AsyncLLMEngine; it can abstract actual input/output or expose raw in/out so models, tokenizers, decoders, etc. can send/receive raw data/numpy/tensors.

    - DALI support[5]. Combined with everything above, you can add DALI to the processing chain to, for example, take input images/audio/etc., copy them to the GPU once, GPU-accelerate scaling/conversion/resampling/whatever, and get the output (sketched below).
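
    As a rough illustration of that DALI step, here is a minimal image pre-processing pipeline sketch using DALI's Python API. The external-source name, batch size, and normalization constants are assumptions, and the serialized pipeline path depends on how the dali_backend model repository is laid out.

        # pip install nvidia-dali-cuda120 (pick the build matching your CUDA version)
        from nvidia.dali import fn, pipeline_def, types

        @pipeline_def(batch_size=8, num_threads=2, device_id=0)
        def preprocess():
            # Encoded images arrive from the Triton request; the name must match
            # the input declared in the DALI model's config.pbtxt (assumed name).
            raw = fn.external_source(device="cpu", name="DALI_INPUT", dtype=types.UINT8)
            images = fn.decoders.image(raw, device="mixed", output_type=types.RGB)  # decode on GPU
            images = fn.resize(images, resize_x=224, resize_y=224)
            return fn.crop_mirror_normalize(
                images,
                dtype=types.FLOAT,
                output_layout="CHW",
                mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
            )

        # Serialize the pipeline for the Triton DALI backend (hypothetical repository path).
        preprocess().serialize(filename="model_repository/dali_preprocess/1/model.dali")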

    vLLM and HF TGI are very cool and I use them in certain cases. The fact that you can give them an HF model and they just fire up with a single command and offer good performance is very impressive, but there are untold reasons these providers use Triton. It's in a class of its own.
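
    For the export-and-optimize side of the workflow described above, a minimal sketch of Model Navigator's Python API might look like the following. The toy PyTorch model, the dataloader, and the output path are placeholders, and the exact profiling/SLA knobs depend on the Navigator version and configuration.

        import torch
        import model_navigator as nav  # pip install triton-model-navigator

        # Toy model and dataloader standing in for a real workload (placeholders).
        model = torch.nn.Sequential(
            torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
        )
        dataloader = [torch.randn(8, 128) for _ in range(10)]

        # Export to the supported formats (e.g. TorchScript/ONNX/TensorRT), verify
        # correctness, profile the candidates, and return a package describing them.
        package = nav.torch.optimize(model=model, dataloader=dataloader)

        # Save the .nav package so it can be re-profiled/deployed on the serving hardware.
        nav.package.save(package, "linear.nav")

    From there the package can be added to a Triton model repository and profiled against latency/throughput targets on the target hardware, which is the flow the post describes.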

    [0] - https://mistral.ai/news/la-plateforme/

    [1] - https://www.cloudflare.com/press-releases/2023/cloudflare-po...

    [2] - https://www.nvidia.com/en-us/case-studies/amazon-accelerates...

    [3] - https://github.com/triton-inference-server/model_navigator

    [4] - https://github.com/triton-inference-server/client/blob/main/...

    [5] - https://github.com/triton-inference-server/dali_backend

What are some alternatives?

When comparing tensorrtllm_backend and model_navigator you can also consider the following projects:

YetAnotherChatUI - Yet another ChatGPT UI. Bring your own API key.

dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.

lookma - LookMa connects Android devices to locally-run LLMs

tensorrtllm_backend - The Triton TensorRT-LLM Backend

llama.cpp - LLM inference in C/C++

llamafile - Distribute and run LLMs with a single file.

client - Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.