long-range-arena
tldr-transformers
| | long-range-arena | tldr-transformers |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 682 | 167 |
| Growth | 2.9% | - |
| Activity | 0.0 | 0.0 |
| Last commit | 4 months ago | over 1 year ago |
| Language | Python | |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
long-range-arena
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
  https://github.com/google-research/long-range-arena
- [R] The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces
  The Structured State Space for Sequence Modeling (S4) architecture is a new approach to very long-range sequence modeling tasks for vision, language, and audio, showing a capacity to capture dependencies over tens of thousands of steps. Especially impressive are the model's results on the challenging Long Range Arena benchmark, showing an ability to reason over sequences of up to 16,000+ elements with high accuracy.
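  For intuition, here is a minimal NumPy sketch of the linear state-space recurrence that underlies this family of models. This is a hypothetical toy version, not the actual S4 implementation, which uses structured HiPPO-initialized state matrices and a convolutional-kernel reformulation for efficiency:

  ```python
  import numpy as np

  # Toy linear state-space scan: x_{k+1} = A x_k + B u_k, y_k = C x_k.
  # Eigenvalues of A near 1 let the state carry information across
  # thousands of steps, which is the long-range-memory idea behind S4.
  def ssm_scan(A, B, C, u):
      x = np.zeros(A.shape[0])
      ys = []
      for u_k in u:                 # O(L) sequential scan over the input
          x = A @ x + B * u_k       # state update accumulates context
          ys.append(C @ x)          # scalar readout at each step
      return np.array(ys)

  rng = np.random.default_rng(0)
  A = 0.99 * np.eye(4)              # slowly decaying state => long memory
  B = rng.normal(size=4)
  C = rng.normal(size=4)
  y = ssm_scan(A, B, C, rng.normal(size=16_000))
  print(y.shape)                    # (16000,) — one output per input step
  ```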
- [D] Is there a repo in which many lightweight self-attention mechanisms are introduced?
  1.1 Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above; they proposed a benchmark for modeling long-range interactions. It also includes a repository.
- [R] Google’s H-Transformer-1D: Fast One-Dimensional Hierarchical Attention With Linear Complexity for Long Sequence Processing
- [2107.11906] H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI 2021 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory.
  I think the paper is written in a clear style, and I like that the authors included many experiments, including hyperparameter effects, ablations, and extensive baseline comparisons. One thing I would have liked is a comparison of their Informer against more efficient transformers (they compared only against LogTrans and Reformer) on the LRA (https://github.com/google-research/long-range-arena) benchmark.
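  To make the ProbSparse idea concrete, here is a hedged toy sketch: score each query by how far its attention distribution is from uniform, then compute full attention only for the top-u "active" queries. Note this simplification computes the sparsity measure exactly (and so stays quadratic); the paper estimates it from a sampled subset of keys to reach O(n log n):

  ```python
  import numpy as np

  # Simplified illustration of ProbSparse-style attention (not the
  # official Informer code): "lazy" queries fall back to mean(V),
  # "active" queries get full softmax attention.
  def probsparse_attention(Q, K, V, u):
      L, d = Q.shape
      scores = Q @ K.T / np.sqrt(d)             # (L, L) attention logits
      # Sparsity measure M(q) = max - mean: near 0 for queries whose
      # attention is close to uniform, large for peaked queries.
      M = scores.max(axis=1) - scores.mean(axis=1)
      top = np.argsort(M)[-u:]                  # indices of the u active queries
      out = np.repeat(V.mean(axis=0, keepdims=True), L, axis=0)
      active = scores[top]
      w = np.exp(active - active.max(axis=1, keepdims=True))
      w /= w.sum(axis=1, keepdims=True)
      out[top] = w @ V                          # full attention for top-u only
      return out

  rng = np.random.default_rng(0)
  Q, K, V = (rng.normal(size=(128, 16)) for _ in range(3))
  print(probsparse_attention(Q, K, V, u=16).shape)   # (128, 16)
  ```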
tldr-transformers
- Show HN: The “tl;dr” of Recent Transformer Papers
- Show HN: "Tl;dr" on Transformers Papers
  With the explosion in research on all things transformers, it seemed there was a need to have a single table to distill the "tl;dr" of each paper's contributions relative to each other. Here is what I have so far: https://github.com/will-thompson-k/tldr-transformers . Would love feedback - and feel free to contribute too :)
- [P] NLP "tl;dr" Notes on Transformers
  In any case, I'm liking the first glance so far. I'd just transpose the summary tables so they wouldn't get so tightly squeezed: https://github.com/will-thompson-k/tldr-transformers/blob/main/notes/bart.md
  With the explosion in work on all things transformers, I felt the need to keep a single table of the "tl;dr" of various papers to distill their main takeaways: https://github.com/will-thompson-k/tldr-transformers . Would love feedback!
What are some alternatives?
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
NLP-progress - Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".
FARM - :house_with_garden: Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
lemmatization-lists - Machine-readable lists of lemma-token pairs in 23 languages.
jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
azure-sql-db-openai - Samples on how to use Azure SQL database with Azure OpenAI
elegy - A High Level API for Deep Learning in JAX
transformers-convert
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
language-planner - Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"