RWKV-LM VS flash-attention

Compare RWKV-LM vs flash-attention and see what their differences are.

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. (by BlinkDL)

flash-attention

Fast and memory-efficient exact attention (by Dao-AILab)
                RWKV-LM             flash-attention
Mentions        84                  26
Stars           11,657              10,888
Stars growth    -                   5.7%
Activity        8.8                 9.4
Last commit     4 days ago          7 days ago
Language        Python              Python
License         Apache License 2.0  BSD 3-Clause "New" or "Revised" License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

RWKV-LM

Posts with mentions or reviews of RWKV-LM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • Do LLMs need a context window?
    1 project | news.ycombinator.com | 25 Dec 2023
    https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.

    https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :

    > RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)

    > RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.

    > So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).

    > "Our latest version is RWKV-6,*

  • People who've used RWKV, whats your wishlist for it?
    9 projects | /r/LocalLLaMA | 9 Dec 2023
  • Paving the way to efficient architectures: StripedHyena-7B
    1 project | news.ycombinator.com | 8 Dec 2023
  • Understanding Deep Learning
    1 project | news.ycombinator.com | 26 Nov 2023
    That is not true. There are RNNs with transformer/LLM-like performance. See https://github.com/BlinkDL/RWKV-LM.
  • Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
    3 projects | news.ycombinator.com | 19 Sep 2023
    This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.

    Wow. Pretty darn cool! <3 :'))))

  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    Thanks for the support! Two weeks ago, I'd have said longer contexts on small on-device LLMs are at least a year away, but developments from last week seem to indicate that it's well within reach. Once the low-hanging product features are done, I think it's a worthy problem to spend a couple of weeks or perhaps even months on. Speaking of context lengths, recurrent models like RWKV technically have infinite context lengths, but in practice the context slowly fades away after a few thousand tokens.
  • "If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"
    1 project | /r/singularity | 26 Jun 2023
    This is the reason there are only a few AI labs, and they show little of the theoretical and scientific understanding you believe is required. Go check their code; there's nothing there. Even the transformer, with its heads and other architectural elements, turns out not to do anything, and it is less efficient than RNNs. (see https://github.com/BlinkDL/RWKV-LM)
  • The Secret Sauce behind 100K context window in LLMs: all tricks in one place
    3 projects | news.ycombinator.com | 17 Jun 2023
    I've been pondering the same thing, as simply extending the context window in a straightforward manner would lead to a significant increase in computational resources. I've had the opportunity to experiment with Anthropic's 100k model, and it's evident that they're employing some clever techniques to make it work, albeit with some imperfections. One interesting observation is that their prompt guide recommends placing instructions after the reference text when inputting lengthy text bodies. I noticed that the model often disregarded the instructions if placed beforehand. It's clear that the model doesn't allocate the same level of "attention" to all parts of the input across the entire context window.

    Moreover, the inability to cache transformers makes the use of large context windows quite costly, as all previous messages must be sent with each call. In this context, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution. They claim to achieve performance comparable to transformers using an RNN, which could potentially handle a 100-page document and cache it, thereby eliminating the need to process the entire document with each subsequent query. However, I suspect RWKV might fall short in handling complex tasks that require maintaining multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.
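
    The caching argument is easy to see in code. Assuming a hypothetical model object with a fixed-size state plus feed/generate methods (an illustrative interface, not the real RWKV-LM API), the long document is paid for once and every query reuses the snapshot:

      import copy

      def answer_queries(model, document_tokens, queries):
          model.feed(document_tokens)              # O(len(document)), done once
          cached = copy.deepcopy(model.state)      # snapshot the fixed-size state
          answers = []
          for q in queries:
              model.state = copy.deepcopy(cached)  # rewind to "document read" point
              answers.append(model.generate(q))    # each query costs only O(len(q))
          return answers

    A transformer without such a state has to resend, and re-attend over, the whole document on every call, which is exactly the cost complained about above.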

    On a related note, I believe Anthropic's Claude is somewhat underappreciated. In some instances, it outperforms GPT4, and I'd rank it somewhere between GPT4 and Bard overall.

  • Meta's plan to offer free commercial AI models puts pressure on Google, OpenAI
    1 project | news.ycombinator.com | 16 Jun 2023
    > The only reason open-source LLMs have a heartbeat is they’re standing on Meta’s weights.

    Not necessarily.

    RWKV, for example, is a different architecture that wasn't based on Facebook's weights whatsoever. I don't know where BlinkDL (the author) got the training data, but they seem to have done everything mostly independently otherwise.

    https://github.com/BlinkDL/RWKV-LM

    disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased since this is the model I have the most experience in.

  • Eliezer Yudkowsky - open letter on AI
    1 project | /r/HPMOR | 15 Jun 2023
    I think the main concern is that the resources put into LLM research, for finding new ways to refine and improve them, can then be used by projects that do go the extra mile and create things that are more than just LLMs. For example, RWKV is similar to an LLM but updates its internal state after every processed token, letting it remember things longer-term without the use of 'context tokens'.

flash-attention

Posts with mentions or reviews of flash-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
    1 project | dev.to | 8 Apr 2024
    If you're looking for an implementation, I highly recommend checking out Flash Attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
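
    For anyone following that recommendation, the package exposes a fused attention function along roughly these lines; treat this as a hedged sketch (it assumes a CUDA device and fp16/bf16 tensors, and the exact signature should be checked against the repo's README):

      import torch
      from flash_attn import flash_attn_func  # pip install flash-attn

      # Shapes are (batch, seqlen, nheads, headdim), fp16/bf16, on GPU.
      q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
      k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
      v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

      out = flash_attn_func(q, k, v, causal=True)  # exact attention, no n*n matrix
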
  • Interactive Coloring with ControlNet
    1 project | news.ycombinator.com | 17 Feb 2024
    * Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.

    [1] https://github.com/Dao-AILab/flash-attention/issues/190

  • Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
    1 project | news.ycombinator.com | 14 Jan 2024
    Highly recommend using Tri's implementation (https://github.com/Dao-AILab/flash-attention). Rotary should be built in, and some group overseas even contributed ALiBi.
  • PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
    2 projects | /r/LocalLLaMA | 10 Dec 2023
    Doesn't seem so; see https://github.com/Dao-AILab/flash-attention/issues/542 (no updates for a while).
  • VLLM: 24x faster LLM serving than HuggingFace Transformers
    3 projects | news.ycombinator.com | 20 Jun 2023
    I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory-aware" attention project I'm aware of.

    I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using the OS/CPU memory better?
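
    For context on the SRAM point: FlashAttention tiles the computation so the full n x n score matrix never leaves fast on-chip memory, via an "online" softmax that processes keys block by block while rescaling running statistics. A NumPy sketch of the idea (the math only, not the CUDA kernel):

      import numpy as np

      def attention_tiled(q, k, v, block=128):
          """softmax(q @ k.T / sqrt(d)) @ v, one key-block at a time.

          Numerically equivalent to the naive version, but only an
          (n, block) tile of scores exists at any moment.
          """
          n, d = q.shape
          out = np.zeros_like(v)
          m = np.full(n, -np.inf)          # running row-wise max (stability)
          l = np.zeros(n)                  # running softmax denominator
          for s in range(0, k.shape[0], block):
              scores = q @ k[s:s + block].T / np.sqrt(d)   # (n, block) tile
              m_new = np.maximum(m, scores.max(axis=1))
              scale = np.exp(m - m_new)                    # rescale old stats
              p = np.exp(scores - m_new[:, None])
              l = l * scale + p.sum(axis=1)
              out = out * scale[:, None] + p @ v[s:s + block]
              m = m_new
          return out / l[:, None]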

  • Hacking Around ChatGPT’s Character Limits with the Code Interpreter
    1 project | news.ycombinator.com | 27 May 2023
    https://github.com/HazyResearch/flash-attention
  • Flash Attention on Consumer
    1 project | /r/LocalLLM | 10 May 2023
  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
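
    For a sense of why the O(n²) cost bites so quickly, a back-of-envelope for materializing one head's fp16 score matrix:

      for n in (4_096, 65_536, 1_000_000):
          gib = n * n * 2 / 2**30      # 2 bytes per fp16 score
          print(f"n = {n:>9,}: {gib:,.2f} GiB per head")

    That is about 0.03 GiB at 4k tokens, 8 GiB at 64k, and roughly 1,863 GiB at a million, which is why memory-aware and sub-quadratic approaches like the ones above exist at all.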

  • Ask HN: Bypassing GPT-4 8k tokens limit
    5 projects | news.ycombinator.com | 1 May 2023
    Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have github repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
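
    The O(n log n) figure comes from replacing explicit token-token interactions with a long convolution evaluated via FFT. A hedged NumPy sketch of that core primitive (illustrative only, not the Stanford group's code, which wasn't public at the time):

      import numpy as np

      def long_conv_fft(x, kernel):
          """Causal convolution of a length-n signal with a length-n
          kernel in O(n log n), the core primitive of state-space layers."""
          n = x.shape[0]
          fx = np.fft.rfft(x, 2 * n)                # zero-pad to avoid wraparound
          fk = np.fft.rfft(kernel, 2 * n)
          return np.fft.irfft(fx * fk, 2 * n)[:n]   # keep the causal part

      x = np.random.randn(1 << 16)                  # 65,536-step sequence
      kernel = np.exp(-0.01 * np.arange(1 << 16))   # a decaying implicit filter
      y = long_conv_fft(x, kernel)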

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.

What are some alternatives?

When comparing RWKV-LM and flash-attention you can also consider the following projects:

llama - Inference code for Llama models

xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.

alpaca-lora - Instruct-tune LLaMA on consumer hardware

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

gpt4all - gpt4all: run open-source LLMs anywhere

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"

RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )

alpaca_lora_4bit

nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.

XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model