Flash-attention Alternatives
Similar projects and alternatives to flash-attention
- diffusers
  🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
- RWKV-LM
  RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNN and transformer: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
- DeepSpeed
  DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- kernl
  Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
- TensorRT
  NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- RWKV-v2-RNN-Pile
  RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
- memory-efficient-attention-pytorch
  Discontinued. Implementation of memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory".
- heinsen_routing
  Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
- CodeRL
  Official code for the paper "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning" (NeurIPS 2022).
flash-attention discussion
flash-attention reviews and mentions
- FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
1) Pretty much; it's mathematically equivalent. The only software issues are things like managing dependency versions and in-memory data formats, but Flash Attention 2 is already built into HuggingFace and other popular libraries. Flash Attention 3 probably will be soon, although it requires an H100 GPU to run.
2) Flash Attention 2 added support for GQA in past version updates:
https://github.com/Dao-AILab/flash-attention
3) They're comparing this implementation of Flash Attention (which is written in raw CUDA C++) to the Triton implementation of a similar algorithm (which is written in Triton): https://triton-lang.org/main/getting-started/tutorials/06-fu...
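For reference, the HuggingFace integration the comment mentions is a one-line switch. A minimal sketch, assuming a recent transformers release, the flash-attn package installed, a supported NVIDIA GPU, and a placeholder model id:

```python
# Sketch: opting into the FlashAttention-2 backend via Hugging Face Transformers.
# Assumes flash-attn is installed and the GPU supports it; the model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any model with flash-attn support works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # flash-attn kernels run in fp16/bf16
    attn_implementation="flash_attention_2",  # the one-line switch to the flash-attn backend
).to("cuda")

inputs = tokenizer("FlashAttention tiles the softmax to stay in SRAM.", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```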
- How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
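As a rough sketch of what calling the library directly looks like (assuming the flash-attn package is installed and a CUDA GPU is available; the (batch, seqlen, nheads, headdim) layout follows the package's documented interface):

```python
# Sketch: calling the fused kernel from the flash-attn package directly.
# Inputs must be fp16/bf16 CUDA tensors of shape (batch, seqlen, nheads, headdim).
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 2048, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")
k = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")
v = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")

# Numerically matches softmax(Q K^T / sqrt(d)) V, but computed block-by-block in SRAM
# so the full seqlen x seqlen attention matrix is never materialized.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 2048, 8, 64])
```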
- Interactive Coloring with ControlNet
* Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.
[1] https://github.com/Dao-AILab/flash-attention/issues/190
- Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
Highly recommend using Tri's implementation (https://github.com/Dao-AILab/flash-attention). Rotary should be built in, and some group overseas even contributed ALiBi.
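Both pieces mentioned here ship with the package in recent releases: a rotary-embedding layer under flash_attn.layers.rotary, and an alibi_slopes argument on the kernel itself. A rough sketch; exact module paths and signatures may differ between flash-attn versions:

```python
# Sketch: rotary embeddings plus ALiBi with the flash-attn package.
# Assumes a recent flash-attn release; APIs may vary by version.
import torch
from flash_attn import flash_attn_func
from flash_attn.layers.rotary import RotaryEmbedding

batch, seqlen, nheads, headdim = 2, 1024, 8, 64

# Packed qkv of shape (batch, seqlen, 3, nheads, headdim); rotary is applied to q and k.
qkv = torch.randn(batch, seqlen, 3, nheads, headdim, dtype=torch.bfloat16, device="cuda")
rotary = RotaryEmbedding(dim=headdim, device="cuda")
qkv = rotary(qkv)
q, k, v = qkv.unbind(dim=2)

# ALiBi: one fp32 slope per head, passed straight to the fused kernel.
slopes = torch.tensor([2.0 ** -(i + 1) for i in range(nheads)],
                      dtype=torch.float32, device="cuda")
out = flash_attn_func(q, k, v, causal=True, alibi_slopes=slopes)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```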
- PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542 (no updates for a while).
- VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.
I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using OS/CPU memory better?
- Hacking Around ChatGPT’s Character Limits with the Code Interpreter
https://github.com/HazyResearch/flash-attention
- Flash Attention on Consumer
- Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
- Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g. the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure it will improve things dramatically very soon.
Stats
Dao-AILab/flash-attention is an open source project licensed under the BSD 3-Clause "New" or "Revised" License, an OSI-approved license.
The primary programming language of flash-attention is Python.
Popular Comparisons
- flash-attention VS xformers
- flash-attention VS TensorRT
- flash-attention VS DeepSpeed
- flash-attention VS RWKV-LM
- flash-attention VS memory-efficient-attention-pytorch
- flash-attention VS XMem
- flash-attention VS alpaca_lora_4bit
- flash-attention VS RWKV-v2-RNN-Pile
- flash-attention VS kernl
- flash-attention VS flash-attention-jax