Long-range-arena Alternatives
Similar projects and alternatives to long-range-arena
- performer-pytorch: An implementation of Performer, a linear-attention-based Transformer, in PyTorch
- attention-is-all-you-need-pytorch: A PyTorch implementation of the Transformer model from "Attention Is All You Need"
- HJxB: Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
- jax-resnet: Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax)
- scenic: A JAX library for computer vision research and beyond (by google-research)
- tldr-transformers (discontinued): The "tl;dr" on a few notable transformer papers (pre-2022)
- LFattNet: Attention-based View Selection Networks for Light-field Disparity Estimation
- flaxmodels: Pretrained deep learning models for JAX/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
- RWKV-LM: RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: strong performance, fast inference and training, low VRAM use, "infinite" ctx_len, and free sentence embeddings
- Informer2020: The GitHub repository for the paper "Informer", accepted at AAAI 2021
- awesome-fast-attention (discontinued): A list of efficient attention modules
long-range-arena reviews and mentions
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
- [D] Is there a repo on which many light-weight self-attention mechanisms are introduced?
  Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above, who proposed a benchmark for modeling long-range interactions; it also includes a repository.
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI'21 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the output sequence in one step, and self-attention distilling further reduces memory.
  I think the paper is clearly written, and I like that the authors included many experiments: hyperparameter effects, ablations, and extensive baseline comparisons. One thing I would have liked is a comparison of Informer against more efficient transformers (they compared only against LogTrans and Reformer) on the LRA (https://github.com/google-research/long-range-arena) benchmark.
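The ProbSparse mechanism mentioned above can be sketched in a few lines of NumPy: score each query by a max-minus-mean "sparsity measurement" over its attention scores, let only the top-u queries attend over all keys, and fill the remaining outputs with the mean of V. This is an illustrative sketch, not the authors' implementation; in particular, Informer samples keys when computing the measurement to avoid materializing the full O(n²) score matrix, which this sketch computes for simplicity.

```python
import numpy as np

def probsparse_attention(Q, K, V, u):
    """Illustrative ProbSparse-style attention (after Zhou et al., AAAI'21).

    Only the top-u "active" queries, ranked by a max-minus-mean sparsity
    score, attend over all keys; "lazy" queries fall back to the mean of V.
    Sketch only: the paper's version samples keys for the score instead of
    computing the full score matrix as done here.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) scaled dot-products
    # Sparsity measurement M(q, K): max score minus mean score per query.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(M)[-u:]                       # indices of the u most active queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1)) # lazy queries -> mean of V
    sel = scores[top]
    w = np.exp(sel - sel.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # softmax over keys
    out[top] = w @ V                               # active queries get full attention
    return out
```

Because only u = O(log n) queries do full attention, the dominant cost drops from O(n²) to O(n log n) once the score matrix is also sparsified as in the paper.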
-
Stats
google-research/long-range-arena is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of long-range-arena is Python.
Popular Comparisons
- long-range-arena VS performer-pytorch
- long-range-arena VS attention-is-all-you-need-pytorch
- long-range-arena VS HJxB
- long-range-arena VS jax-resnet
- long-range-arena VS scenic
- long-range-arena VS tldr-transformers
- long-range-arena VS elegy
- long-range-arena VS LFattNet
- long-range-arena VS flaxmodels
- long-range-arena VS gansformer