long-range-arena vs performer-pytorch
| | long-range-arena | performer-pytorch |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 682 | 1,055 |
| Growth | 2.9% | - |
| Activity | 0.0 | 1.8 |
| Latest commit | 4 months ago | about 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
long-range-arena
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
  https://github.com/google-research/long-range-arena
- [R] The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces
  The Structured State Space for Sequence Modeling (S4) architecture is a new approach to very long-range sequence modeling tasks for vision, language, and audio, showing a capacity to capture dependencies over tens of thousands of steps. Especially impressive are the model's results on the challenging Long Range Arena benchmark, showing an ability to reason over sequences of up to 16,000+ elements with high accuracy. (A minimal state-space sketch follows this list.)
- [D] Is there a repo on which many light-weight self-attention mechanisms are introduced?
  1.1 Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above; they proposed a benchmark for modeling long-range interactions. It also includes a repository.
- [R] Google’s H-Transformer-1D: Fast One-Dimensional Hierarchical Attention With Linear Complexity for Long Sequence Processing
- [2107.11906] H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI 2021 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory.
  I think the paper is written in a clear style, and I like that the authors included many experiments, including hyperparameter effects, ablations, and extensive baseline comparisons. One thing I would have liked is a comparison of their Informer against more efficient transformers (they compared only against LogTrans and Reformer) using the LRA (https://github.com/google-research/long-range-arena) benchmark. (A rough ProbSparse sketch also follows this list.)
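For context on the Annotated S4 post above: S4's core is a linear state-space model, discretized and applied either as a recurrence or as a long convolution. Below is a minimal, illustrative sketch of that recurrence; the naive loop and kernel materialization are assumptions for exposition only, since the real S4 gives A special HiPPO-derived structure and computes the kernel in roughly O(L log L).

```python
import torch

def ssm_recurrence(A, B, C, u):
    """Naive discretized state-space model:
        x_k = A x_{k-1} + B u_k,   y_k = C x_k
    A: (N, N), B: (N, 1), C: (1, N), u: (L,) scalar input sequence."""
    x = torch.zeros(A.shape[0], 1)
    ys = []
    for u_k in u:                      # O(L * N^2) loop -- exposition only
        x = A @ x + B * u_k
        ys.append((C @ x).squeeze())
    return torch.stack(ys)

def ssm_kernel(A, B, C, L):
    """The same map is a convolution of u with the kernel
    K = (CB, CAB, CA^2B, ...); S4 computes K without this loop."""
    K, AkB = [], B.clone()
    for _ in range(L):
        K.append((C @ AkB).squeeze())
        AkB = A @ AkB
    return torch.stack(K)
```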
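Similarly, here is a hedged sketch of the ProbSparse idea from the Informer post, assuming a single non-causal head; the function name and `factor` parameter are illustrative, and the real Informer also subsamples keys when scoring queries and uses a cumulative mean in the causal case.

```python
import math
import torch
import torch.nn.functional as F

def probsparse_attention(q, k, v, factor=5):
    """Only the top-u "active" queries (u ~ c * ln L) run full softmax
    attention; "lazy" queries fall back to the mean of V, cutting work
    from O(L^2) toward O(L log L). q, k, v: (L, d)."""
    L, d = q.shape
    scores = q @ k.t() / math.sqrt(d)                 # (L, L)
    # Sparsity measure per query: max minus mean of its score row.
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)
    u = min(L, int(factor * math.ceil(math.log(L))))
    active = sparsity.topk(u).indices
    out = v.mean(dim=0, keepdim=True).expand(L, d).clone()  # lazy queries
    out[active] = F.softmax(scores[active], dim=-1) @ v     # active queries
    return out
```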
performer-pytorch
- [R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
  Performer is the best linear attention variant, but linear attention is just one type of efficient attention solution. I have rotary embeddings already in the repo https://github.com/lucidrains/performer-pytorch and you can witness this phenomenon yourself by toggling it on/off. (A rough RoPE sketch follows this list.)
- Why has Google's Performer model not replaced traditional softmax attention?
  Here's a PyTorch implementation if you want to play around with it: lucidrains/performer-pytorch: An implementation of Performer, a linear attention-based transformer, in Pytorch (github.com). (A sketch of the linear-attention idea also follows this list.)
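To make the rotary-embedding discussion concrete, here is a minimal sketch of the idea, separate from performer-pytorch's actual API: consecutive channel pairs of the queries and keys are rotated by a position-dependent angle, so the q·k dot product becomes a function of relative position only.

```python
import torch

def rotary_embed(x, base=10000.0):
    """Apply rotary positional embeddings to x of shape (L, dim),
    dim even. Applied to queries and keys, not values."""
    L, dim = x.shape
    # Per-pair rotation frequencies, as in the RoPE paper.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.arange(L).float()[:, None] * inv_freq   # (L, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin      # 2-D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# q, k = rotary_embed(q), rotary_embed(k)  # then attend as usual
```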
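And for the Performer question, a hedged sketch of the FAVOR+ linear-attention trick: positive random features approximate the softmax kernel, so attention can be computed as phi(Q)(phi(K)^T V) in time linear in sequence length. This is a non-causal, single-head illustration under simplified assumptions; the real Performer orthogonalizes the random projection, redraws it periodically, and handles causal masking with prefix sums.

```python
import math
import torch

def favor_features(x, w):
    """Positive random features for the softmax kernel:
    phi(x) = exp(w x - |x|^2 / 2) / sqrt(m), with w ~ N(0, I)."""
    x = x / x.shape[-1] ** 0.25                 # fold in 1/sqrt(d) scaling
    return torch.exp(x @ w.t() - (x ** 2).sum(-1, keepdim=True) / 2) \
        / math.sqrt(w.shape[0])

def performer_attention(q, k, v, n_features=256):
    """Approximate softmax attention in O(L * m * d). q, k, v: (L, d)."""
    w = torch.randn(n_features, q.shape[-1])
    q_p, k_p = favor_features(q, w), favor_features(k, w)   # (L, m)
    kv = k_p.t() @ v                 # (m, d): summary of keys and values
    z = q_p @ k_p.sum(dim=0)         # (L,): per-query normalizer
    return (q_p @ kv) / z[:, None]
```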
What are some alternatives?
- attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".
- Perceiver - Implementation of Perceiver, General Perception with Iterative Attention, in TensorFlow
- HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
- memory-efficient-attention-pytorch - Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory"
- jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
- reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
- tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
- vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
- elegy - A High-Level API for Deep Learning in JAX
- deep-implicit-attention - Implementation of deep implicit attention in PyTorch
- LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
- scenic - Scenic: A Jax Library for Computer Vision Research and Beyond