| | jax-resnet | long-range-arena |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 98 | 683 |
| Growth | - | 0.9% |
| Activity | 0.0 | 0.0 |
| Last commit | almost 2 years ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Posts mentioning these projects:
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
  https://github.com/google-research/long-range-arena
- [R] The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces
  The Structured State Space for Sequence Modeling (S4) architecture is a new approach to very long-range sequence modeling tasks for vision, language, and audio, showing a capacity to capture dependencies over tens of thousands of steps. Especially impressive are the model’s results on the challenging Long Range Arena benchmark, showing an ability to reason over sequences of up to 16,000+ elements with high accuracy. (A minimal state-space recurrence sketch follows this list.)
- [D] Is there a repo on which many light-weight self-attention mechanisms are introduced?
  1.1 Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above; they proposed a benchmark for modeling long-range interactions, and it also includes a repository.
- [R] Google’s H-Transformer-1D: Fast One-Dimensional Hierarchical Attention With Linear Complexity for Long Sequence Processing
- [2107.11906] H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI 2021 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory.
  I think the paper is written in a clear style, and I like that the authors included many experiments, including hyperparameter effects, ablations, and extensive baseline comparisons. One thing I would have liked to see is a comparison of their Informer against more efficient transformers (they compared only against LogTrans and Reformer) on the LRA (https://github.com/google-research/long-range-arena) benchmark.
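
The S4 post above describes capturing dependencies over tens of thousands of steps with a structured state space. As a rough illustration only (this is not the code from the Annotated S4 or the long-range-arena repository), the sketch below runs a toy diagonal linear state-space recurrence over a long sequence with `jax.lax.scan`; all parameter names and dimensions are made up for the example.

```python
import jax
import jax.numpy as jnp

def ssm_apply(a, b, c, u):
    """Toy single-channel linear state-space model with a diagonal transition:
    x_t = a * x_{t-1} + b * u_t,  y_t = <c, x_t>, scanned over a sequence u."""
    def step(x, u_t):
        x = a * x + b * u_t          # elementwise update (diagonal "A" matrix)
        return x, jnp.dot(c, x)      # carry new state, emit scalar readout

    x0 = jnp.zeros_like(a)
    _, ys = jax.lax.scan(step, x0, u)
    return ys

# Hypothetical toy usage on a 16k-step signal (the LRA-scale lengths mentioned above).
state_dim, seq_len = 64, 16_384
a = jnp.exp(-jnp.linspace(0.01, 1.0, state_dim))   # decay rates in (0, 1) keep the recurrence stable
b = jnp.ones(state_dim)
c = jax.random.normal(jax.random.PRNGKey(0), (state_dim,))
u = jnp.sin(jnp.linspace(0.0, 100.0, seq_len))
print(ssm_apply(a, b, c, u).shape)                 # (16384,)
```

The actual S4 layer parameterizes its state matrices far more carefully (HiPPO initialization, convolution-mode evaluation); the sketch only shows the recurrent shape of the computation.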
What are some alternatives?
dm-haiku - JAX-based neural network library
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in PyTorch (a linear-attention sketch appears after this list)
flaxmodels - Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".
pax - A stateful pytree library for training neural networks.
HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
faceswap - Deepfakes Software For All
elegy - A High Level API for Deep Learning in JAX
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
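
performer-pytorch above is built on linear attention. As a hedged sketch (not that library's API, and using a simple elu+1 feature map rather than Performer's random-feature kernel), the snippet below shows how attention can be rearranged to cost O(n·d²) instead of O(n²·d); all shapes and names are illustrative.

```python
import jax
import jax.numpy as jnp

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1), without forming the n x n matrix."""
    phi = lambda x: jax.nn.elu(x) + 1.0        # non-negative feature map (illustrative choice)
    q, k = phi(q), phi(k)
    kv = jnp.einsum("nd,ne->de", k, v)         # sum_t phi(k_t) v_t^T, shape (d, e)
    z = q @ k.sum(axis=0)                      # per-query normalizer, shape (n,)
    return (q @ kv) / (z[:, None] + eps)

# Toy check on a long sequence: memory stays linear in n.
n, d = 4096, 64
key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(k_, (n, d)) for k_ in jax.random.split(key, 3))
print(linear_attention(q, k, v).shape)         # (4096, 64)
```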