flash-attention
xformers
| | flash-attention | xformers |
|---|---|---|
| Mentions | 26 | 46 |
| Stars | 10,642 | 7,473 |
| Stars growth (month over month) | 7.5% | 5.2% |
| Activity | 9.4 | 9.4 |
| Latest commit | 9 days ago | 8 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flash-attention
-
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542. No updates for a while.
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory-aware" attention project I'm aware of.
I guess Flash Attention is more about using GPU SRAM efficiently, whereas this is more about using OS/CPU memory better?
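For intuition, here is a minimal single-head sketch of the tiling idea FlashAttention is built around: keys and values are processed in blocks with an online softmax, so the full n×n score matrix is never materialized in slow memory. This is a plain-PyTorch illustration, not the library's fused CUDA kernel, and the function name is made up for the example.

```python
import torch

def tiled_attention(q, k, v, block=1024):
    """Single-head attention computed over key/value tiles with an online
    softmax, so only one (n x block) slice of scores exists at a time.
    Hypothetical helper for illustration; not FlashAttention's kernel."""
    n, d = q.shape
    out = torch.zeros_like(q)
    row_max = torch.full((n, 1), float("-inf"))
    row_sum = torch.zeros(n, 1)
    for start in range(0, k.shape[0], block):
        k_blk, v_blk = k[start:start + block], v[start:start + block]
        scores = q @ k_blk.T / d ** 0.5                # (n, block) tile only
        new_max = torch.maximum(row_max, scores.max(-1, keepdim=True).values)
        scale = torch.exp(row_max - new_max)           # rescale what was accumulated so far
        p = torch.exp(scores - new_max)
        out = out * scale + p @ v_blk
        row_sum = row_sum * scale + p.sum(-1, keepdim=True)
        row_max = new_max
    return out / row_sum

# matches the naive reference up to floating-point error:
# torch.softmax(q @ k.T / d**0.5, -1) @ v
```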
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari, which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
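As a point of comparison, a bare-bones version of the KNN-retrieval idea looks something like the following. `knn_context` is a hypothetical helper, and a real system would use an approximate-nearest-neighbor index (e.g. FAISS) rather than a dense similarity matrix.

```python
import torch

def knn_context(queries, mem_keys, mem_values, k=32):
    """For each query token, retrieve its k most similar memory tokens and
    attend only over that retrieved set (hypothetical sketch of KNN search
    as a stand-in for full token-token attention)."""
    sims = queries @ mem_keys.T                   # (n_q, n_mem) similarities
    top_sims, top_idx = sims.topk(k, dim=-1)      # keep k neighbors per query
    weights = torch.softmax(top_sims, dim=-1)     # softmax over the retrieved set only
    gathered = mem_values[top_idx]                # (n_q, k, d)
    return (weights.unsqueeze(-1) * gathered).sum(dim=1)
```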
-
Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
-
Scaling Transformer to 1M tokens and beyond with RMT
Here's a list of tools for scaling up transformer context that have github repos:
* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention
* Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing
* RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM
* RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
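To make the n² vs. n×m distinction above concrete, here is a toy cross-attention sketch in which n input tokens attend to m memory vectors, so cost is O(n·m) rather than O(n²) for fixed m. It only illustrates the shape of the computation; it is not the Heinsen-routing algorithm or any of the repos listed above.

```python
import torch
import torch.nn.functional as F

def cross_attention(x, mem, w_q, w_k):
    """n x m attention: each of the n input tokens attends to m memory
    vectors. For fixed m, cost and memory grow linearly in n."""
    q = x @ w_q                                  # (n, d_k)
    k = mem @ w_k                                # (m, d_k)
    scores = q @ k.T / q.shape[-1] ** 0.5        # (n, m) instead of (n, n)
    return F.softmax(scores, dim=-1) @ mem       # (n, d_mem)
```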
-
Stability AI Launches the First of Its StableLM Suite of Language Models
https://github.com/HazyResearch/flash-attention#memory
"standard attention has memory quadratic in sequence length, whereas FlashAttention has memory linear in sequence length."
-
[News] OpenAI Announced GPT-4
As posted above, it seems likely that GPT-4 uses Flash Attention. Their GitHub page claims that an A100 tops out at 4k tokens. It was my understanding that this was a hard upper limit given the current hardware. So scaling to 32k wouldn't just mean throwing more compute at the problem, but rather a change in the architecture. Flash Attention is an architecture change that can achieve 32k (even 64k according to the GitHub page) context length on an A100.
- [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API)
-
[P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
Parallelization of the work is done along different axes: batch and attention head in the original flash attention, and the Triton author added a third one, tokens, i.e. the third dimension of Q (this important trick is now also part of the flash attention CUDA implementation).
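A conceptual rendering of that grid: every (batch, head, query-block) triple is an independent unit of work, which is what the extra token axis adds. The toy PyTorch loop below stands in for the GPU launch grid; the real Triton/CUDA kernels run these units in parallel, and the function name is made up for the example.

```python
import torch
from itertools import product

def attention_by_grid(q, k, v, q_block=128):
    """q, k, v: (batch, heads, seq, dim). Each (batch, head, query-block)
    cell is independent; the loop mimics the kernel's parallel grid."""
    b, h, n, d = q.shape
    out = torch.empty_like(q)
    n_blocks = (n + q_block - 1) // q_block
    for bi, hi, qi in product(range(b), range(h), range(n_blocks)):
        s, e = qi * q_block, min((qi + 1) * q_block, n)
        scores = q[bi, hi, s:e] @ k[bi, hi].T / d ** 0.5
        out[bi, hi, s:e] = torch.softmax(scores, dim=-1) @ v[bi, hi]
    return out
```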
-
Turing Machines Are Recurrent Neural Networks (1996)
In 2016, Transformers didn't exist and the state of the art for neural-network-based NLP was LSTMs, which had a limit of maybe 100 words at most.
With new implementations like xformers[1] and flash attention[2] it is unclear where the length limit is on modern transformer models.
Flash Attention can currently scale up to 64,000 tokens on an A100.
[1] https://github.com/facebookresearch/xformers/blob/main/HOWTO...
[2] https://github.com/HazyResearch/flash-attention
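For reference, calling xformers' memory-efficient attention on a long sequence looks roughly like this (assuming a CUDA build of xformers with its kernels available; inputs follow the (batch, seq, heads, head_dim) convention):

```python
import torch
from xformers.ops import memory_efficient_attention

q = torch.randn(1, 16_384, 8, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# never materializes the 16k x 16k score matrix
out = memory_efficient_attention(q, k, v)   # (1, 16384, 8, 64)
```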
xformers
-
Colab | Errors when installing x-formers
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0+cu118 which is incompatible.
torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
Python 3.10.13 (you have 3.10.12)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.22.post3
-
FlashAttention-2, 2x faster than FlashAttention
This enables V1; V2 has yet to be integrated into xformers. The team replied saying it should happen this week.
See the relevant Github issue here: https://github.com/facebookresearch/xformers/issues/795
- Slow/short replies?
-
If you use xformers, an update dropped yesterday; 0.0.17 requires torch 2.0.0
Go here: https://github.com/facebookresearch/xformers/actions/runs/4543337013
-
How to Fix "Exception importing xformers" Error?
I found a suggestion to add this to the args (https://github.com/facebookresearch/xformers/issues/664):
-
Installing cuDNN to boost Stable Diffusion performance on RTX 30x and 40x graphics cards
Any idea if the new xformers pre-release (0.0.17rc481) will work with this setup? https://github.com/facebookresearch/xformers/issues/693
Probably gotta wait on Facebook to update xformers to support Torch 2.0
*Stable branch of xformers isn't compatible with torch 2.0 yet. There is a dev branch that is compatible, and I tried it, but it isn't compatible with other libraries so image gen still isn't possible with both torch 2.0 and xformers. I'm going to wait until everything updates before committing to 2.0
-
LLaMA: A foundational, 65B-parameter large language model
I'm going to assume you know how to stand up and manage a distributed training cluster as a simplifying assumption. Note this is an aggressive assumption.
You would need to replicate the preprocessing steps. Replicating these steps is going to be tricky as they are not described in detail. Then you would need to implement the model using xformers [1]. Using xformers is going to save you a lot of compute spend. You will need to manually implement the backwards pass to reduce recomputation of expensive activations.
The model was trained using 2048 A100 GPUs with 80GB of VRAM each. A single 8x A100 GPU machine from Lambda Cloud costs $12.00/hr [2]. The team from Meta used 256 such machines, giving a per-day cost of $73,728. It takes 21 days to train this model. The upfront lower-bound cost estimate of doing this is [(12.00 * 24) * 256 * 21 = ] $1,548,288, assuming everything goes smoothly and your model doesn't bite it during training. You may be able to negotiate bulk pricing for these types of workloads.
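Working that estimate out explicitly (all numbers taken from the paragraph above):

```python
machine_hour = 12.00        # one 8x A100 machine on Lambda Cloud, per hour
machines = 256              # 256 * 8 = 2048 GPUs
days = 21
cost = machine_hour * 24 * machines * days
print(f"${cost:,.0f}")      # $1,548,288 for compute alone, assuming no failed runs
```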
That dollar value is just for the compute resources alone. Given the compute costs required you will probably also want a team composed of ML Ops engineers to monitor the training cluster and research scientists to help you with the preprocessing and model pipelines.
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
SHARK - SHARK - High Performance Machine Learning Distribution
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
alpaca_lora_4bit
llama - Inference code for Llama models