flash-attention vs EfficientZero
| | flash-attention | EfficientZero |
|---|---|---|
| Mentions | 26 | 9 |
| Stars | 10,773 | 826 |
| Growth | 9.6% | - |
| Activity | 9.4 | 0.0 |
| Latest commit | 20 days ago | 4 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flash-attention
-
How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
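(For readers who want to try it: a minimal sketch of calling the fused kernel, assuming the flash-attn package is installed and a supported CUDA GPU is available; shapes and dtype follow the project's README.)

```python
import torch
from flash_attn import flash_attn_func  # pip install flash-attn

# Dummy inputs: (batch, seqlen, nheads, headdim), fp16/bf16 on a CUDA device as the kernel requires.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Fused attention; causal=True applies the usual autoregressive mask.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```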
-
Interactive Coloring with ControlNet
* Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.
[1] https://github.com/Dao-AILab/flash-attention/issues/190
-
Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
Highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention. Rotary should be built in, and some group overseas even contributed ALiBi.
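For comparison, a minimal reference implementation of the plain O(n²) causal attention that the fused kernel replaces (a sketch in plain PyTorch, not code from the thread):

```python
import math
import torch
import torch.nn.functional as F

def causal_self_attention(q, k, v):
    """Reference O(n^2) causal attention. q, k, v: (batch, nheads, seqlen, headdim)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)                     # (b, h, n, n)
    mask = torch.triu(torch.ones(scores.shape[-2:], dtype=torch.bool,
                                 device=scores.device), diagonal=1)     # hide future positions
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v                                # (b, h, n, headdim)
```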
-
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542. No updates for a while.
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.
I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using OS/CPU memory better?
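For a sense of why attention memory management matters at these scales, a back-of-the-envelope sketch of KV-cache size, using illustrative (not measured) numbers for a 7B-class model:

```python
# Rough KV-cache size estimate; all figures are illustrative assumptions.
layers, heads, head_dim = 32, 32, 128
seq_len, batch, bytes_per_elem = 2048, 8, 2   # fp16
kv_bytes = 2 * layers * heads * head_dim * seq_len * batch * bytes_per_elem  # 2 = keys + values
print(f"{kv_bytes / 2**30:.1f} GiB")          # 8.0 GiB just to hold keys and values
```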
-
Hacking Around ChatGPT’s Character Limits with the Code Interpreter
https://github.com/HazyResearch/flash-attention
- Flash Attention on Consumer
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
-
Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
-
Scaling Transformer to 1M tokens and beyond with RMT
Here's a list of tools for scaling up transformer context that have github repos:
* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention
* Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing
* RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM
* RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
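To make the O(n log n) idea in that last paragraph concrete: the long-convolution primitive behind those state-space models (as in the safari repo linked earlier) can be sketched roughly as an FFT convolution. This is an illustration, not the Stanford group's actual code.

```python
import torch

def fft_long_conv(u, k):
    """Depthwise long convolution via FFT: O(n log n), versus O(n^2) for dense attention.
    u: (batch, channels, seqlen) input; k: (channels, seqlen) learned long filter."""
    n = u.size(-1)
    u_f = torch.fft.rfft(u, n=2 * n)                      # zero-pad to avoid circular wrap-around
    k_f = torch.fft.rfft(k, n=2 * n)
    return torch.fft.irfft(u_f * k_f, n=2 * n)[..., :n]   # keep the first n outputs
```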
EfficientZero
-
[D] GPT-3T: Can we train language models to think further ahead?
Here's an algorithm that is more sample-efficient: https://github.com/YeWR/EfficientZero
-
MuZero learns to play Teamfight Tactics
Using multiprocessing to have more GPU workers could help. My code, based on EfficientZero (https://github.com/YeWR/EfficientZero), utilizes CPUs and GPUs to 90%. It uses Ray for multiprocessing and splits Reanalyze into CPU and GPU workers to maximize resource utilization. By the way, it's not converging to the optimal policy well: it gets stuck at 50% of optimal episode return with a small amount of training. Have you had this issue before?
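A rough sketch of the Ray pattern described above, splitting work between CPU workers and fractional-GPU workers; the worker bodies are placeholders, not EfficientZero's actual Reanalyze code.

```python
import ray

ray.init()

@ray.remote(num_cpus=1)
def cpu_reanalyze(batch):
    # CPU-side work, e.g. preparing targets and MCTS bookkeeping (placeholder).
    return batch

@ray.remote(num_gpus=0.25)  # several GPU workers can share one card (requires a visible GPU)
def gpu_reanalyze(batch):
    # GPU-side work, e.g. batched network inference for fresh value/policy targets (placeholder).
    return batch

batches = list(range(8))
cpu_out = ray.get([cpu_reanalyze.remote(b) for b in batches])
gpu_out = ray.get([gpu_reanalyze.remote(b) for b in cpu_out])
```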
-
[R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved!
Found relevant code at https://github.com/YeWR/EfficientZero + all code implementations here
- Anyone found any working replication repo for MuZero?
-
[D] Most important AI papers this year so far in my opinion + Proto AGI speculation at the end
Mastering Atari Games with Limited Data – EfficientZero (human sample-efficiency!) Paper: https://arxiv.org/abs/2111.00210 Lesswrong article about the paper: https://www.lesswrong.com/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works Github: https://github.com/YeWR/EfficientZero
-
Waymo To Use Chinese Geely Robotaxi Body. This Should Send Shivers Into Western OEMs
Have you seen EfficientZero (https://github.com/YeWR/EfficientZero) yet? This agent is able to solve problems with unknown rules, starting only with information about the shape of the inputs and reward feedback. It does so with superhuman sample efficiency (it needs less training data than humans do) and SoTA-beating results on the problems it has been tried on (various Atari games, Go, chess, etc.).
-
Why does EfficientZero use SimSiam for temporal consistency instead of MAE / MSE?
Open-source codebase for EfficientZero - am I missing something, or is the repo empty?
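On the SimSiam question above: the core of the temporal-consistency loss is a negative cosine similarity with a stop-gradient and a small predictor head, which, unlike a plain MSE between latents, resists representation collapse. A minimal sketch with illustrative module names (projector and predictor are assumed to be small MLPs; this is not EfficientZero's exact code):

```python
import torch.nn.functional as F

def simsiam_consistency(pred_next_latent, target_next_latent, projector, predictor):
    """SimSiam-style consistency loss between a predicted and an observed next latent state."""
    p = predictor(projector(pred_next_latent))   # online branch, receives gradients
    z = projector(target_next_latent).detach()   # target branch, stop-gradient as in SimSiam
    return -F.cosine_similarity(p, z, dim=-1).mean()
```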
-
[D] Paper Explained - EfficientZero: Mastering Atari Games with Limited Data (Full Video Analysis)
Code: https://github.com/YeWR/EfficientZero
-
"EfficientZero: Mastering Atari Games with Limited Data", Ye et al 2021 (beating humans on ALE-100k/2h by adding self-supervised learning to MuZero-Reanalyze)
Code for https://arxiv.org/abs/2111.00210 found: https://github.com/YeWR/EfficientZero
What are some alternatives?
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
flash-attention-jax - Implementation of Flash Attention in Jax
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
RHO-Loss
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
CodeRL - This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (NeurIPS22).
alpaca_lora_4bit
msn - Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)