tensorrtllm_backend VS llama.cpp

Compare tensorrtllm_backend vs llama.cpp and see how they differ.

tensorrtllm_backend

The Triton TensorRT-LLM Backend (by triton-inference-server)

llama.cpp

LLM inference in C/C++ (by ggerganov)
                tensorrtllm_backend    llama.cpp
Mentions        3                      792
Stars           551                    60,282
Growth          13.2%                  -
Activity        8.0                    10.0
Last commit     10 days ago            about 5 hours ago
Language        Python                 C++
License         Apache License 2.0     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

tensorrtllm_backend

Posts with mentions or reviews of tensorrtllm_backend. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-25.
  • How fast can one reasonably expect to get inference on a ~70B model?
    2 projects | news.ycombinator.com | 25 May 2024
    TensorRT-LLM with Triton Inference Server is the fastest in Nvidia land.

    https://github.com/triton-inference-server/tensorrtllm_backe...

  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    Nvidia Triton Inference Server with the TensorRT-LLM backend (see the request sketch after this post):

    https://github.com/triton-inference-server/tensorrtllm_backe...

    It’s used by Mistral, AWS, Cloudflare, and countless others.

    vLLM, HF TGI, Ray Serve, etc. are certainly viable, but Triton has many truly unique and very powerful features (not to mention performance).

    100k DAU doesn’t mean much on its own; you’d need a better understanding of the application: input tokens, generated output tokens, request rates, peaks, etc., not to mention the required time to first token and tokens per second.

    Anyway, the point is Triton is just about the only thing out there for use in this general range and up.
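
To make the Triton option described above more concrete, here is a minimal request sketch. It assumes you have already built a TensorRT-LLM engine and launched tritonserver with the example ensemble model from the tensorrtllm_backend repo; the model name ("ensemble"), field names (text_input, max_tokens), and port are taken from the repo's examples and may differ in your deployment.

    # Hypothetical sketch: query a running Triton server that is serving the
    # tensorrtllm_backend example ensemble over its default HTTP port (8000).
    curl -s -X POST localhost:8000/v2/models/ensemble/generate \
         -d '{"text_input": "What is machine learning?", "max_tokens": 64}'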

  • MK1 Flywheel Unlocks the Full Potential of AMD Instinct for LLM Inference
    3 projects | news.ycombinator.com | 8 Jan 2024
    I support any progress to erode the Nvidia monopoly.

    That said, from what I'm seeing here, the free and open-source TensorRT-LLM[0] (less other aspects of the CUDA stack, of course) almost certainly bests this implementation on the Nvidia hardware they reference for comparison.

    I don't have an A6000, but as an example, with the tensorrt_llm backend for Nvidia Triton Inference Server (also free and open source) I get roughly 30 req/s with Mistral 7B on my RTX 4090, with significantly lower latency. Comparison benchmarks are tough, especially when published benchmarks like these are fairly scant on the real details.

    TensorRT-LLM has only been public for a few months, and if you peruse the docs, PRs, etc., you'll see they have many more optimizations in the works.

    In typical Nvidia fashion, TensorRT-LLM runs on any Nvidia card (from laptop to datacenter) going back to Turing (five-year-old cards), assuming you have the VRAM.

    You can download and run this today, free and "open source" for these implementations at least. I'm extremely skeptical of the claim "MK1 Flywheel has the Best Throughput and Latency for LLM Inference on NVIDIA". You'll note they compare to vLLM, which is an excellent and incredible project but if you look at vLLM vs Triton w/ TensorRT-LLM the performance improvements are dramatic.

    Of course it's the latest and greatest ($$$$$$ and unobtanium) but one look at H100/H200 performance[3] and you can see what happens when the vendor has a robust software ecosystem to help sell their hardware. Pay the Nvidia tax on the frontend for the hardware, get it back as a dividend on the software.

    I feel like MK1 must be aware of TensorRT-LLM but of course those comparison benchmarks won't help sell their startup.

    [0] - https://github.com/NVIDIA/TensorRT-LLM

    [1] - https://github.com/triton-inference-server/tensorrtllm_backe...

    [2] - https://mkone.ai/blog/mk1-flywheel-race-tuned-and-track-read...

    [3] - https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source...

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-10.
  • Apple Intelligence, the personal intelligence system
    4 projects | news.ycombinator.com | 10 Jun 2024
    > Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device.

    On the contrary, I'm shocked at how, over the last few months, "on device" on a MacBook Pro or Mac Studio has come to compete plausibly with last year's early GPT-4, leveraging Llama 3 70B or Qwen2 72B.

    There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.

    From this thread covering performance of llama.cpp on Apple Silicon M-series …

    https://github.com/ggerganov/llama.cpp/discussions/4167

    "Buy as much memory as you can afford would be my bottom line!"

  • Partial Outage on Claude.ai
    1 project | news.ycombinator.com | 4 Jun 2024
    I'd love to use local models, but seems like most of the easy to use software out there (LM Studio, Backyard AI, koboldcpp) doesn't really play all that nicely with my Intel Arc GPU and it's painfully slow on my Ryzen 5 4500. Even my M1 MacBook isn't that fast at generating text with even 7B models.

    I wonder if llama.cpp with SYCL could help; I'll have to try it out: https://github.com/ggerganov/llama.cpp/blob/master/README-sy...

    But even if that worked, I'd still have the problem that IDEs and whatever else I have open already eats most of the 32 GB of RAM my desktop PC has. Whereas if I ran a small code model on the MacBook and connected to it through my PC, it'd still probably be too slow for autocomplete, when compared to GitHub Copilot and less accurate than ChatGPT or Phind for most stuff.
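
For reference, the SYCL path mentioned above is a separate build configuration. A rough sketch, assuming the Intel oneAPI toolkit is installed and following the README-sycl instructions as they stood at the time (flag names may have changed since):

    # Sketch: build llama.cpp with the SYCL backend for Intel GPUs.
    source /opt/intel/oneapi/setvars.sh
    cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
    cmake --build build --config Release -j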

  • Why YC Went to DC
    3 projects | news.ycombinator.com | 3 Jun 2024
    You're correct if you're focused exclusively on the work surrounding building foundation models to begin with. But if you take a broader view, having open models that we can legally fine tune and hack with locally has created a large and ever-growing community of builders and innovators that could not exist without these open models. Just take a look at projects like InvokeAI [0] in the image space or especially llama.cpp [1] in the text generation space. These projects are large, have lots of contributors, move very fast, and drive a lot of innovation and collaboration in applying AI to various domains in a way that simply wouldn't be possible without the open models.

    [0] https://github.com/invoke-ai/InvokeAI

    [1] https://github.com/ggerganov/llama.cpp

  • Show HN: Open-Source Load Balancer for Llama.cpp
    6 projects | news.ycombinator.com | 1 Jun 2024
  • RAG with llama.cpp and external API services
    2 projects | dev.to | 31 May 2024
    The first example will build an embeddings database backed by llama.cpp vectorization.
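
As context for that embeddings example, llama.cpp's bundled HTTP server can produce the vectors itself when started with the --embedding flag. The model path, port, and request shape below are illustrative assumptions rather than the article's exact setup.

    # Sketch: serve embeddings with llama.cpp's HTTP server, then request a vector.
    ./server -m models/nomic-embed-text-v1.5.Q8_0.gguf --embedding --port 8080
    curl -s localhost:8080/embedding -d '{"content": "Hello, world"}'
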
  • Ask HN: I have many PDFs – what is the best local way to leverage AI for search?
    10 projects | news.ycombinator.com | 30 May 2024
    and at some point (https://github.com/ggerganov/llama.cpp/issues/7444)
  • Deploying llama.cpp on AWS (with Troubleshooting)
    1 project | dev.to | 28 May 2024
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    LLAMA_CUDA=1 make -j
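
After a CUDA build like the one above, a typical next step on a GPU instance is to start the bundled HTTP server so the machine can accept requests. The model path, port, and -ngl value here are illustrative, and newer llama.cpp builds name the binary llama-server instead of server.

    # Sketch: serve a GGUF model from the freshly built llama.cpp on a GPU instance.
    ./server -m models/mistral-7b-instruct.Q4_K_M.gguf -ngl 99 --host 0.0.0.0 --port 8080
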
  • Devoxx Genie Plugin : an Update
    6 projects | dev.to | 28 May 2024
    I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
  • Mistral Fine-Tune
    2 projects | news.ycombinator.com | 25 May 2024
    The output of the LLM is not just one token, but a statistical distribution across all possible output tokens. The tool you use to generate output will sample from this distribution with various techniques, and you can put constraints on it like not being too repetitive. Some of them support getting very specific about the allowed output format, e.g. https://github.com/ggerganov/llama.cpp/blob/master/grammars/... So even if the LLM says that an invalid token is the most likely next token, the tool will never select it for output. It will only sample from valid tokens.
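
For illustration, llama.cpp exposes this constrained sampling through GBNF grammar files; the repository ships a JSON grammar, and passing it to the CLI restricts generation to tokens that keep the output valid JSON, whatever the raw distribution prefers. The model filename below is a placeholder.

    # Sketch: constrain output to valid JSON using the bundled GBNF grammar.
    ./main -m models/mistral-7b-instruct.Q4_K_M.gguf \
           --grammar-file grammars/json.gbnf \
           -p "Return a JSON object describing a book, with title and author fields."
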
  • Distributed LLM Inference with Llama.cpp
    1 project | news.ycombinator.com | 24 May 2024

What are some alternatives?

When comparing tensorrtllm_backend and llama.cpp you can also consider the following projects:

YetAnotherChatUI - Yet another ChatGPT UI. Bring your own API key.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

model_navigator - Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.

gpt4all - gpt4all: run open-source LLMs anywhere

dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

ggml - Tensor library for machine learning

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧

ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
