TensorRT vs llama.cpp

Compare TensorRT and llama.cpp and see how they differ.

TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. (by NVIDIA)

llama.cpp

LLM inference in C/C++ (by ggerganov)

                 TensorRT              llama.cpp
Mentions         22                    769
Stars            9,065                 55,846
Growth           4.0%                  -
Activity         5.0                   10.0
Last commit      13 days ago           7 days ago
Language         C++                   C++
License          Apache License 2.0    MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

TensorRT

Posts with mentions or reviews of TensorRT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-26.
  • AMD MI300X 30% higher performance than Nvidia H100, even with optimized stack
    1 project | news.ycombinator.com | 17 Dec 2023
    > It's not rocket science to implement matrix multiplication in any GPU.

    You're right, it's harder. Saying this as someone who's done more work on the former than the latter. (I have, with a team, built a rocket engine. And not your school or backyard project size, but nozzle-bigger-than-your-face kind. I've also written CUDA kernels, and boy is there a big learning curve to the latter; you have to fundamentally rethink how you view a problem. It's unquestionable why CUDA devs are paid so much. Really it's only questionable why they aren't paid more.)

    I know it is easy to think this problem is easy, it really looks that way. But there's an incredible amount of optimization that goes into all of this and that's what's really hard. You aren't going to get away with just N for loops for a rank-N tensor. You've got to chop the data up, be intelligent about it, manage memory, how you load memory, handle many data types, take into consideration different results for different FMA operations, and a whole lot more. There are a whole lot of non-obvious things that result in high optimization (maybe obvious __after__ the fact, but that's not truthfully "obvious"). The thing is, the space is so well researched and implemented that you can't get away with naive implementations; you have to be on the bleeding edge.

    Then you have to do all that and make it reasonably usable for the programmer too, abstracting away all of that. CUDA also has a huge head start, and momentum is a force to be reckoned with (pun intended).

    Look at TensorRT[0]. The software isn't even complete and it still isn't going to cover all neural networks on all GPUs. I've had stuff work on a V100 and H100 but not an A100, then later get fixed. They even have the "Apple Advantage" in that they have control of the hardware. I'm not certain AMD will have the same advantage. We talk a lot about the difficulties of being first mover, but I think we can also recognize that momentum is an advantage of being first mover. And it isn't one to scoff at.

    [0] https://github.com/NVIDIA/TensorRT
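
    To make the tiling point above concrete, here is a toy Python sketch (an editor's illustration, not from the original comment) contrasting the naive "N for loops" matmul with a blocked version. Real GPU kernels layer shared-memory staging, vectorized loads, and mixed precision on top of this basic idea.

        # Toy illustration of loop blocking/tiling for C = A @ B on square
        # n x n matrices stored as lists of lists. Tile size is arbitrary.

        def matmul_naive(A, B, n):
            # The "just N for loops" version: correct, but cache-oblivious.
            C = [[0.0] * n for _ in range(n)]
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        C[i][j] += A[i][k] * B[k][j]
            return C

        def matmul_blocked(A, B, n, tile=32):
            # Walk the output in tile x tile blocks so each block reuses
            # operand tiles; on a GPU, a thread block would stage these
            # tiles in shared memory before multiplying.
            C = [[0.0] * n for _ in range(n)]
            for ii in range(0, n, tile):
                for jj in range(0, n, tile):
                    for kk in range(0, n, tile):
                        for i in range(ii, min(ii + tile, n)):
                            for j in range(jj, min(jj + tile, n)):
                                s = C[i][j]
                                for k in range(kk, min(kk + tile, n)):
                                    s += A[i][k] * B[k][j]
                                C[i][j] = s
            return C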

  • Getting SDXL-turbo running with tensorRT
    1 project | /r/StableDiffusion | 6 Dec 2023
    (python demo_txt2img.py "a beautiful photograph of Mt. Fuji during cherry blossom"). https://github.com/NVIDIA/TensorRT/tree/release/8.6/demo/Diffusion
  • Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
    14 projects | news.ycombinator.com | 26 Sep 2023
    - https://github.com/NVIDIA/TensorRT

    TVM and other compiler-based approaches seem to perform really well and make supporting different backends really easy. A good friend who's been in this space for a while told me llama.cpp is sort of a "hand-crafted" version of what these compilers could output, which I think speaks to the craftsmanship Georgi and the ggml team have put into llama.cpp, but also the opportunity to "compile" versions of llama.cpp for other model architectures or platforms.

  • Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
    3 projects | news.ycombinator.com | 8 Sep 2023
    https://github.com/NVIDIA/TensorRT/issues/982

    Maybe? Looks like tensorRT does work, but I couldn't find much.

  • Train Your AI Model Once and Deploy on Any Cloud
    3 projects | news.ycombinator.com | 8 Jul 2023
    Highly optimized transformer-based encoder and decoder components, supported on PyTorch, TensorFlow, and Triton.

    TensorRT, a custom ML framework/inference runtime from NVIDIA (https://developer.nvidia.com/tensorrt), but you have to port your models.

  • A1111 just added support for TensorRT for webui as an extension!
    5 projects | /r/StableDiffusion | 27 May 2023
  • WIP - TensorRT accelerated stable diffusion img2img from mobile camera over webrtc + whisper speech to text. Interdimensional cable is here! Code: https://github.com/venetanji/videosd
    3 projects | /r/StableDiffusion | 21 Feb 2023
    It uses the nvidia demo code from: https://github.com/NVIDIA/TensorRT/tree/main/demo/Diffusion
  • [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
    7 projects | /r/MachineLearning | 8 Feb 2023
    The traditional way to deploy a model is to export it to ONNX, then to the TensorRT plan format. Each step requires its own tooling, its own mental model, and may raise its own issues. The most annoying thing is that you need Microsoft or Nvidia support to get the best performance, and sometimes model support takes time. For instance, T5, a model released in 2019, is still not correctly supported on TensorRT; in particular, K/V cache is missing (soon it will be, according to TensorRT maintainers, but I wrote the very same thing almost 1 year ago and then 4 months ago so… I don't know).
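
    As a hedged sketch of that ONNX-to-plan pipeline (an editor's illustration using the TensorRT 8.x Python API; the toy model, file names, and FP16 flag are assumptions, not details from the post):

        import torch
        import tensorrt as trt

        # 1) Export a toy PyTorch model to ONNX (placeholder model).
        model = torch.nn.Linear(16, 4).eval()
        torch.onnx.export(model, torch.randn(1, 16), "model.onnx")

        # 2) Parse the ONNX file and build a serialized engine ("plan").
        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)
        with open("model.onnx", "rb") as f:
            if not parser.parse(f.read()):
                raise RuntimeError(parser.get_error(0))
        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)  # optional: allow FP16 kernels
        with open("model.plan", "wb") as f:
            f.write(builder.build_serialized_network(network, config))
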
  • Speeding up T5
    2 projects | /r/LanguageTechnology | 22 Jan 2023
    I've tried to speed it up with TensorRT and followed this example: https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/notebooks/t5.ipynb - it does give considerable speedup for batch-size=1, but it does not work with bigger batch sizes, which makes it useless, as I can simply increase the batch size of the HuggingFace model.
  • demoDiffusion on TensorRT - supports 3090, 4090, and A100
    1 project | /r/StableDiffusion | 10 Dec 2022

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-21.
  • Phi-3 Weights Released
    1 project | news.ycombinator.com | 23 Apr 2024
    well https://github.com/ggerganov/llama.cpp/issues/6849
  • Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
    3 projects | news.ycombinator.com | 21 Apr 2024
  • Llama.cpp Working on Support for Llama3
    1 project | news.ycombinator.com | 18 Apr 2024
  • Embeddings are a good starting point for the AI curious app developer
    7 projects | news.ycombinator.com | 17 Apr 2024
    Have just done this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app with a built-in llama.cpp server and a local vector database.)

    Running an embedding server locally is pretty straightforward:

    - Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
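
    As a hedged sketch (an editor's illustration, not from the original comment): once the server binary is running with embeddings enabled (something like ./server -m model.gguf --embedding --port 8080; model path and port are placeholders), its /embedding endpoint can be queried like this:

        import json
        import urllib.request

        def embed(text, url="http://127.0.0.1:8080/embedding"):
            # POST {"content": ...} to the llama.cpp server's embedding route.
            req = urllib.request.Request(
                url,
                data=json.dumps({"content": text}).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["embedding"]

        vector = embed("Embeddings are a good starting point")
        print(len(vector))  # dimensionality depends on the loaded model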

  • Mixtral 8x22B
    4 projects | news.ycombinator.com | 17 Apr 2024
  • Llama.cpp: Improve CPU prompt eval speed
    1 project | news.ycombinator.com | 17 Apr 2024
  • Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
    9 projects | news.ycombinator.com | 17 Apr 2024
    Ah, thanks for this! I can't edit my parent comment that you replied to any longer unfortunately.

    As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and list at most 100 contributors, ranked by number of commits.

    [0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors

  • KodiBot - Local Chatbot App for Desktop
    2 projects | dev.to | 11 Apr 2024
    KodiBot is a desktop app that enables users to run their own AI chat assistants locally and offline on Windows, Mac, and Linux. KodiBot is a standalone app and does not require an internet connection or additional dependencies to run local chat assistants. It supports both llama.cpp-compatible models and the OpenAI API.
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • The lifecycle of a code AI completion
    6 projects | news.ycombinator.com | 7 Apr 2024
    For those who might not be aware of this, there is also an open source project on GitHub called "Twinny" which is an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny

    It can be used with a number of local model services. Currently, for my setup on an NVIDIA 4090, I'm running both the base and instruct models for deepseek-coder 6.7b using 5_K_M-quantized GGUF files (for performance) through the llama.cpp "server", where the base model handles completions and the instruct model handles chat interactions.

    llama.cpp: https://github.com/ggerganov/llama.cpp/

    deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...

    deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
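
    As a hedged illustration of wiring something up against that "server" (an editor's sketch; the model file, port, and sampling parameters are assumptions, not the poster's exact configuration), a completion request could look like:

        import json
        import urllib.request

        # Assumes a llama.cpp server started with something like:
        #   ./server -m deepseek-coder-6.7b-base.Q5_K_M.gguf --port 8080
        # (file name and port are placeholders).
        def complete(prompt, n_predict=64,
                     url="http://127.0.0.1:8080/completion"):
            payload = {"prompt": prompt, "n_predict": n_predict,
                       "temperature": 0.2}
            req = urllib.request.Request(
                url,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["content"]

        print(complete("def fibonacci(n):"))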

What are some alternatives?

When comparing TensorRT and llama.cpp you can also consider the following projects:

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

FasterTransformer - Transformer related optimization, including BERT, GPT

gpt4all - gpt4all: run open-source LLMs anywhere

onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

ggml - Tensor library for machine learning

stable-diffusion-webui - Stable Diffusion web UI

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM