long-range-arena
awesome-fast-attention
| | long-range-arena | awesome-fast-attention |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 682 | 827 |
| Growth | 2.9% | - |
| Activity | 0.0 | 1.0 |
| Last commit | 4 months ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
long-range-arena
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
  https://github.com/google-research/long-range-arena
- [R] The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces
  The Structured State Space for Sequence Modeling (S4) architecture is a new approach to very long-range sequence modeling tasks for vision, language, and audio, showing a capacity to capture dependencies over tens of thousands of steps. Especially impressive are the model’s results on the challenging Long Range Arena benchmark, showing an ability to reason over sequences of up to 16,000+ elements with high accuracy.
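  For intuition about how a state-space layer can carry information across thousands of steps, here is a minimal NumPy sketch of the linear recurrence at the heart of S4. This is not the actual S4 parameterization (which uses a HiPPO-initialized structured state matrix and evaluates the recurrence as a global convolution for speed); the diagonal `A` and all names below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discrete linear state-space model over a 1-D input sequence.

    State update:  x_k = A @ x_{k-1} + B * u_k
    Readout:       y_k = C @ x_k
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k   # the state accumulates long-range context
        ys.append(C @ x)      # project the state to a scalar output
    return np.array(ys)

# Toy run over a 16,000-step sequence, matching the LRA-scale lengths above.
rng = np.random.default_rng(0)
n = 8
A = np.diag(np.linspace(0.5, 0.999, n))  # eigenvalues < 1 keep the scan stable
B = rng.standard_normal(n)
C = rng.standard_normal(n)
y = ssm_scan(A, B, C, rng.standard_normal(16_000))
print(y.shape)  # (16000,)
```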
- [D] Is there a repo on which many light-weight self-attention mechanisms are introduced?
  Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above; they propose a benchmark for modeling long-range interactions. It also includes a repository.
- [R] Google’s H-Transformer-1D: Fast One-Dimensional Hierarchical Attention With Linear Complexity for Long Sequence Processing
- [2107.11906] H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI 2021 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory.
  I think the paper is written in a clear style, and I like that the authors included many experiments, including hyperparameter effects, ablations, and extensive baseline comparisons. One thing I would have liked is a comparison of their Informer against more efficient transformers (they compared only against LogTrans and Reformer) on the LRA (https://github.com/google-research/long-range-arena) benchmark.
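  To make the ProbSparse idea above concrete, here is a hedged NumPy sketch: it keeps only the queries whose score distribution is farthest from uniform (the max-minus-mean measurement from the paper) and lets the remaining "lazy" queries fall back to the mean of V. For clarity this version scores every query-key pair; the actual Informer samples a subset of keys for the measurement, which is what brings the cost down to O(n log n). All names here are illustrative.

```python
import numpy as np

def probsparse_attention(Q, K, V, u):
    """Simplified ProbSparse self-attention: attend only with top-u queries."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (L_Q, L_K) -- full, for clarity
    # Sparsity measurement: how far each query's scores are from uniform.
    M = scores.max(axis=1) - scores.mean(axis=1)  # (L_Q,)
    top = np.argsort(M)[-u:]                      # the u most "active" queries
    # Softmax over the active rows only.
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Lazy queries output mean(V), the paper's default for self-attention.
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))
    out[top] = w @ V                              # full attention for active rows
    return out

L, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
y = probsparse_attention(Q, K, V, u=int(np.ceil(np.log(L))))
print(y.shape)  # (512, 64)
```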
awesome-fast-attention
What are some alternatives?
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
how-do-vits-work - (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".
a-PyTorch-Tutorial-to-Transformers - Attention Is All You Need | a PyTorch Tutorial to Transformers
HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
elegy - A High Level API for Deep Learning in JAX
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
scenic - Scenic: A Jax Library for Computer Vision Research and Beyond
flaxmodels - Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
gansformer - Generative Adversarial Transformers