TensorRT VS flash-attention

Compare TensorRT and flash-attention to see how they differ.

TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. (by NVIDIA)

flash-attention

Fast and memory-efficient exact attention (by Dao-AILab)
                 TensorRT               flash-attention
Mentions         22                     26
Stars            9,065                  10,642
Stars growth     4.0%                   8.5%
Activity         5.0                    9.4
Last commit      9 days ago             15 days ago
Language         C++                    Python
License          Apache License 2.0     BSD 3-clause "New" or "Revised" License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

TensorRT

Posts with mentions or reviews of TensorRT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-26.
  • AMD MI300X 30% higher performance than Nvidia H100, even with optimized stack
    1 project | news.ycombinator.com | 17 Dec 2023
    > It's not rocket science to implement matrix multiplication in any GPU.

    You're right, it's harder. Saying this as someone who's done more work on the former than the latter. (I have, with a team, built a rocket engine, and not your school or backyard project size, but the nozzle-bigger-than-your-face kind. I've also written CUDA kernels, and boy is there a big learning curve: you have to fundamentally rethink how you view a problem. It's unquestionable why CUDA devs are paid so much. Really, it's only questionable why they aren't paid more.)

    I know it is easy to think this problem is easy; it really looks that way. But there's an incredible amount of optimization that goes into all of this, and that's what's really hard. You aren't going to get away with just N for-loops for a rank-N tensor. You've got to chop the data up, be intelligent about it, manage memory and how you load it, handle many data types, account for different results from different FMA operations, and a whole lot more. There are a lot of non-obvious things that yield high performance (maybe obvious __after__ the fact, but that isn't truthfully "obvious"). The thing is, the space is so well researched and implemented that you can't get away with naive implementations; you have to be on the bleeding edge.

    Then you have to do that and make it reasonably usable for the programmer too, abstracting all of that away. CUDA also has a huge head start, and momentum is a force to be reckoned with (pun intended).

    Look at TensorRT[0]. The software isn't even complete, and it still isn't going to cover all neural networks on all GPUs. I've had stuff work on a V100 and H100 but not an A100, then later get fixed. They even have the "Apple Advantage" in that they control the hardware. I'm not certain AMD will have the same advantage. We talk a lot about the difficulties of being a first mover, but I think we can also recognize that momentum is one of its advantages. And it isn't one to scoff at.

    [0] https://github.com/NVIDIA/TensorRT
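The blocking/tiling idea the commenter describes ("chop the data up ... manage memory, how you load memory") can be sketched in a few lines. This is a minimal, illustrative NumPy version of a blocked matrix multiply, not how a real CUDA kernel is written; the function name and tile size are made up for the example:

```python
import numpy as np

def tiled_matmul(A, B, tile=64):
    """Blocked matrix multiply: work on tile x tile sub-blocks so each
    working set stays small, the same idea GPU kernels apply to keep
    operands resident in shared memory / cache."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # accumulate one output tile from one pair of input tiles
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C
```

The three naive for-loops are still there, but the traversal order and block size, plus how each block is staged into fast memory, are exactly where the non-obvious optimization work lives.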

  • Getting SDXL-turbo running with tensorRT
    1 project | /r/StableDiffusion | 6 Dec 2023
    (python demo_txt2img.py "a beautiful photograph of Mt. Fuji during cherry blossom"). https://github.com/NVIDIA/TensorRT/tree/release/8.6/demo/Diffusion
  • Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
    14 projects | news.ycombinator.com | 26 Sep 2023
    - https://github.com/NVIDIA/TensorRT

    TVM and other compiler-based approaches seem to perform really well and make supporting different backends really easy. A good friend who's been in this space for a while told me llama.cpp is sort of a "hand-crafted" version of what these compilers could output, which I think speaks to the craftsmanship Georgi and the ggml team have put into llama.cpp, but also to the opportunity to "compile" versions of llama.cpp for other model architectures or platforms.

  • Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
    3 projects | news.ycombinator.com | 8 Sep 2023
    https://github.com/NVIDIA/TensorRT/issues/982

    Maybe? Looks like TensorRT does work, but I couldn't find much.

  • Train Your AI Model Once and Deploy on Any Cloud
    3 projects | news.ycombinator.com | 8 Jul 2023
    highly optimized transformer-based encoder and decoder component, supported on pytorch, tensorflow and triton

    TensorRT, a custom ML framework / inference runtime from Nvidia, https://developer.nvidia.com/tensorrt, but you have to port your models

  • A1111 just added support for TensorRT for webui as an extension!
    5 projects | /r/StableDiffusion | 27 May 2023
  • WIP - TensorRT accelerated stable diffusion img2img from mobile camera over webrtc + whisper speech to text. Interdimensional cable is here! Code: https://github.com/venetanji/videosd
    3 projects | /r/StableDiffusion | 21 Feb 2023
    It uses the nvidia demo code from: https://github.com/NVIDIA/TensorRT/tree/main/demo/Diffusion
  • [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
    7 projects | /r/MachineLearning | 8 Feb 2023
    The traditional way to deploy a model is to export it to ONNX, then to the TensorRT plan format. Each step requires its own tooling and its own mental model, and may raise issues. The most annoying thing is that you need Microsoft or Nvidia support to get the best performance, and sometimes model support takes time. For instance, T5, a model released in 2019, is still not correctly supported on TensorRT; in particular, K/V cache support is missing (soon it will be, according to TensorRT maintainers, but I wrote the very same thing almost a year ago and then 4 months ago, so… I don't know).
  • Speeding up T5
    2 projects | /r/LanguageTechnology | 22 Jan 2023
    I've tried to speed it up with TensorRT, following this example: https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/notebooks/t5.ipynb - it does give considerable speedup for batch-size=1, but it does not work with bigger batch sizes, which makes it useless, since I can simply increase the batch size of the HuggingFace model.
  • demoDiffusion on TensorRT - supports 3090, 4090, and A100
    1 project | /r/StableDiffusion | 10 Dec 2022
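The ONNX-then-plan deployment path described in the Kernl post above is typically driven with `trtexec`, the command-line tool that ships with TensorRT. A rough sketch of the two steps (the file names are placeholders, and this assumes a working TensorRT install):

```shell
# Step 1 (outside TensorRT): export the model to ONNX,
# e.g. from PyTorch via torch.onnx.export(...).

# Step 2: build a serialized TensorRT engine ("plan") from the ONNX file.
trtexec --onnx=model.onnx \
        --saveEngine=model.plan \
        --fp16
```

Each hop (framework to ONNX, ONNX to plan) is where the unsupported-operator and model-support issues the commenter complains about tend to surface.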

flash-attention

Posts with mentions or reviews of flash-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
    1 project | dev.to | 8 Apr 2024
    If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
  • Interactive Coloring with ControlNet
    1 project | news.ycombinator.com | 17 Feb 2024
    * Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.

    [1] https://github.com/Dao-AILab/flash-attention/issues/190

  • Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
    1 project | news.ycombinator.com | 14 Jan 2024
    Highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention. Rotary should be built in, and a group even contributed ALiBi support.
  • PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
    2 projects | /r/LocalLLaMA | 10 Dec 2023
    Doesn't seem so https://github.com/Dao-AILab/flash-attention/issues/542 No updates for a while.
  • VLLM: 24x faster LLM serving than HuggingFace Transformers
    3 projects | news.ycombinator.com | 20 Jun 2023
    I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.

    I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using OS/CPU memory better?

  • Hacking Around ChatGPT’s Character Limits with the Code Interpreter
    1 project | news.ycombinator.com | 27 May 2023
    https://github.com/HazyResearch/flash-attention
  • Flash Attention on Consumer
    1 project | /r/LocalLLM | 10 May 2023
  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.

  • Ask HN: Bypassing GPT-4 8k tokens limit
    5 projects | news.ycombinator.com | 1 May 2023
    Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have github repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
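The "memory-aware" trick the FlashAttention discussions above keep referring to is that the full n×n score matrix never needs to be materialized: keys/values are processed in blocks with a running ("online") softmax. A minimal NumPy sketch of that idea, purely illustrative (the real kernels fuse this into SRAM-resident GPU tiles and are exact, not approximate):

```python
import numpy as np

def naive_attention(Q, K, V):
    # Materializes the full n x n score matrix: O(n^2) memory.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def tiled_attention(Q, K, V, block=32):
    # Processes K/V in blocks with a running softmax, so only one
    # n x block slab of scores is live at a time.
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    m = np.full(n, -np.inf)   # running row-wise max of the scores
    l = np.zeros(n)           # running softmax denominator
    for j in range(0, K.shape[0], block):
        S = (Q @ K[j:j + block].T) * scale        # scores for this block
        m_new = np.maximum(m, S.max(axis=-1))
        corr = np.exp(m - m_new)                  # rescale old accumulators
        P = np.exp(S - m_new[:, None])
        l = l * corr + P.sum(axis=-1)
        out = out * corr[:, None] + P @ V[j:j + block]
        m = m_new
    return out / l[:, None]
```

Both functions compute the same exact attention output; the tiled version just trades the quadratic score buffer for a streaming pass, which is why it scales to much longer sequences.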

What are some alternatives?

When comparing TensorRT and flash-attention you can also consider the following projects:

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.

FasterTransformer - Transformer related optimization, including BERT, GPT

onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

alpaca_lora_4bit

stable-diffusion-webui - Stable Diffusion web UI

XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model