flash-attention VS heinsen_routing

Compare flash-attention and heinsen_routing to see how they differ.

flash-attention

Fast and memory-efficient exact attention (by Dao-AILab)

heinsen_routing

Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks. (by glassroom)
                flash-attention                            heinsen_routing
Mentions        26                                         7
Stars           10,888                                     160
Stars growth    4.7%                                       0.0%
Activity        9.4                                        2.7
Last commit     6 days ago                                 about 1 year ago
Language        Python                                     Python
License         BSD 3-Clause "New" or "Revised" License    MIT License
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means the project is among the top 10% of the most actively developed projects we track.

flash-attention

Posts with mentions or reviews of flash-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
    1 project | dev.to | 8 Apr 2024
    If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
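
    For reference, a minimal sketch of calling the package's flash_attn_func (this assumes flash-attn 2.x installed on a CUDA GPU, with fp16/bf16 inputs; shapes follow the repository's documentation):

      import torch
      from flash_attn import flash_attn_func  # assumes the flash-attn package is installed

      batch, seqlen, nheads, headdim = 2, 1024, 8, 64

      # FlashAttention expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on the GPU.
      q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
      k = torch.randn_like(q)
      v = torch.randn_like(q)

      # Exact attention, computed without materializing the full seqlen x seqlen score matrix.
      out = flash_attn_func(q, k, v, causal=True)  # (batch, seqlen, nheads, headdim)
      print(out.shape)
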
  • Interactive Coloring with ControlNet
    1 project | news.ycombinator.com | 17 Feb 2024
    * Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.

    [1] https://github.com/Dao-AILab/flash-attention/issues/190

  • Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
    1 project | news.ycombinator.com | 14 Jan 2024
    Highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention . Rotary embeddings should be built in, and a group overseas even contributed ALiBi support.
  • PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
    2 projects | /r/LocalLLaMA | 10 Dec 2023
    Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542 . No updates for a while.
  • VLLM: 24x faster LLM serving than HuggingFace Transformers
    3 projects | news.ycombinator.com | 20 Jun 2023
    I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.

    I guess Flash Attention is more about utilizing GPU SRAM effectively, whereas this is more about using OS/CPU memory better?

  • Hacking Around ChatGPT’s Character Limits with the Code Interpreter
    1 project | news.ycombinator.com | 27 May 2023
    https://github.com/HazyResearch/flash-attention
  • Flash Attention on Consumer
    1 project | /r/LocalLLM | 10 May 2023
  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.

  • Ask HN: Bypassing GPT-4 8k tokens limit
    5 projects | news.ycombinator.com | 1 May 2023
    Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have github repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
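
    A toy sketch of the n×m idea mentioned above: a small, fixed set of m learned query vectors attends over the n input embeddings, so cost scales with n·m rather than n². This is a generic cross-attention illustration under those assumptions, not the heinsen_routing algorithm itself:

      import torch
      import torch.nn as nn

      class LearnedQueryPooling(nn.Module):
          """Reduce n input embeddings to m output embeddings via learned queries."""
          def __init__(self, d_model: int, n_out: int, n_heads: int = 8):
              super().__init__()
              self.queries = nn.Parameter(torch.randn(n_out, d_model) * 0.02)
              self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # x: (batch, n, d_model) -> (batch, m, d_model)
              q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
              out, _ = self.attn(q, x, x)  # the attention weights give an m-by-n credit matrix
              return out

      x = torch.randn(2, 4096, 256)              # n = 4096 input embeddings
      pool = LearnedQueryPooling(256, n_out=16)  # m = 16 output embeddings
      print(pool(x).shape)                       # torch.Size([2, 16, 256])
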

heinsen_routing

Posts with mentions or reviews of heinsen_routing. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
  • What can LLMs never do?
    4 projects | news.ycombinator.com | 27 Apr 2024
    At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups https://github.com/lucidrains/memorizing-transformers-pytorc... or via routed queries with https://github.com/glassroom/heinsen_routing . Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
  • A Surprisingly Effective Way to Estimate Token Importance in LLM Prompts
    1 project | news.ycombinator.com | 12 Sep 2023
    Simple and, in hindsight, obvious:

    1. Run the text through a document embedding model and save the embedding.

    2. Remove one token at a time, and compute the cosine similarity of the new document embedding to the original one.

    3. Compute importance as a function of the change in cosine similarity.

    Nice.

    Also check out https://github.com/glassroom/heinsen_routing . It takes n embeddings and outputs m embeddings, and also gives you an n×m matrix with credit assignments, without having to remove tokens one by one, which can be prohibitively slow for long texts.
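
    A rough sketch of the leave-one-out procedure described above, assuming the sentence-transformers package as the document embedding model (any embedder that maps text to a fixed-size vector would work):

      import numpy as np
      from sentence_transformers import SentenceTransformer  # assumed embedding model

      model = SentenceTransformer("all-MiniLM-L6-v2")

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def token_importance(tokens):
          original = model.encode(" ".join(tokens))  # step 1: embed the full text
          scores = []
          for i in range(len(tokens)):
              ablated = " ".join(tokens[:i] + tokens[i + 1:])  # step 2: drop one token
              sim = cosine(original, model.encode(ablated))
              scores.append(1.0 - sim)  # step 3: a larger drop in similarity means a more important token
          return scores

      print(token_importance("flash attention is fast and memory efficient".split()))
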

  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.

  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have github repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.

  • From Deep to Long Learning
    6 projects | news.ycombinator.com | 9 Apr 2023
    I imagine you could, maybe by using something like this https://github.com/glassroom/heinsen_routing#sequence-to-vec... ... but I doubt you'd be able to match the training efficiency of triangular masking in auto-regressive transformers. With routing, you'd have to train the model one time-step at a time, instead of all time-steps in parallel like a masked auto-regressive transformer.
  • New algorithm can route sequences with 1M+ token embeddings in one GPU
    1 project | news.ycombinator.com | 20 Dec 2022

What are some alternatives?

When comparing flash-attention and heinsen_routing you can also consider the following projects:

xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.

safari - Convolutions for Sequence Modeling

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

block-recurrent-transformer-pytorch - Implementation of Block Recurrent Transformer - Pytorch

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"

iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.

alpaca_lora_4bit

recurrent-memory-transformer - [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture.