attention-is-all-you-need-pytorch
long-range-arena
| | attention-is-all-you-need-pytorch | long-range-arena |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 8,350 | 677 |
| Growth | - | 3.6% |
| Activity | 0.0 | 0.0 |
| Last commit | 6 months ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
attention-is-all-you-need-pytorch
- ElevenLabs Launches Voice Translation Tool to Break Down Language Barriers
The Transformer was invented to attend to context over the entire sequence length. Look at how the original authors used it for NMT in the Vaswani et al. publication: https://github.com/jadore801120/attention-is-all-you-need-py...
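To make that concrete: the attention the comment refers to is the scaled dot-product attention from the paper, Attention(Q, K, V) = softmax(QKᵀ / √d_k)·V, in which every position attends to every other position. A minimal PyTorch sketch, with a function name and shapes chosen for illustration rather than taken from the linked repo:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, length, d_k); every query attends over the full sequence.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, L, L)
    if mask is not None:
        # Positions where mask == 0 are excluded from the softmax.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```

The L×L score matrix is why memory and compute grow quadratically with sequence length, which is the cost regime that long-range-arena is built to probe.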
- Lack of activation in transformer feedforward layer?
I'm curious as to why the second matrix multiplication is not followed by an activation, unlike the first one. Is there any particular reason why a non-linearity would be redundant, or even actively avoided, in the second operation? For reference, variations of this can be seen in a number of different implementations, including BERT-pytorch and attention-is-all-you-need-pytorch.
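For context, the position-wise feed-forward block in Vaswani et al. is defined as FFN(x) = max(0, xW₁ + b₁)W₂ + b₂: only the first projection is followed by a non-linearity, and the second is a plain linear map back to the model dimension ahead of the residual addition and layer norm. A minimal PyTorch sketch, with the module name and default sizes chosen for illustration rather than copied from either repo:

```python
import torch.nn as nn
import torch.nn.functional as F

class PositionwiseFeedForward(nn.Module):
    """FFN(x) = max(0, x W1 + b1) W2 + b2 from "Attention Is All You Need"."""

    def __init__(self, d_model=512, d_ff=2048, dropout=0.1):
        super().__init__()
        self.w_1 = nn.Linear(d_model, d_ff)   # expand to the inner dimension
        self.w_2 = nn.Linear(d_ff, d_model)   # project back, with no activation
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # ReLU only after the first projection; the second projection stays linear.
        return self.w_2(self.dropout(F.relu(self.w_1(x))))
```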
long-range-arena
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
- [D] Is there a repo on which many light-weight self-attention mechanisms are introduced?
1.1 Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above, who proposed a benchmark for modeling long-range interactions; it also includes a repository.
- [R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al. AAAI21 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory
I think the paper is written in a clear style, and I like that the authors included many experiments, including hyperparameter effects, ablations and extensive baseline comparisons. One thing I would have liked is a comparison of Informer against more efficient transformers (they compared only against LogTrans and Reformer) on the LRA (https://github.com/google-research/long-range-arena) benchmark.
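For a rough sense of what the ProbSparse self-attention mentioned above does, here is a deliberately simplified PyTorch sketch: score each query with the max-mean sparsity measure, give full softmax attention only to the top-u queries, and let the remaining queries fall back to the mean of V. The function name is made up, and computing the measure on the full score matrix is itself quadratic; the Informer paper samples keys so that this measurement stays sub-quadratic.

```python
import math
import torch
import torch.nn.functional as F

def probsparse_attention_sketch(q, k, v, c=5):
    # q, k, v: (batch, length, d). Simplified, quadratic-cost illustration only.
    B, L, d = q.shape
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)             # (B, L, L)
    # Max-mean approximation of the query sparsity measure from the Informer paper.
    measure = scores.max(dim=-1).values - scores.mean(dim=-1)   # (B, L)
    u = max(1, min(L, int(c * math.log(L))))                    # number of "active" queries
    top_idx = measure.topk(u, dim=-1).indices                   # (B, u)

    # "Lazy" queries fall back to the mean of V ...
    out = v.mean(dim=1, keepdim=True).expand(B, L, d).clone()
    # ... while the top-u queries get ordinary softmax attention.
    top_scores = torch.gather(scores, 1, top_idx.unsqueeze(-1).expand(B, u, L))
    top_out = F.softmax(top_scores, dim=-1) @ v                 # (B, u, d)
    out.scatter_(1, top_idx.unsqueeze(-1).expand(B, u, d), top_out)
    return out
```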
What are some alternatives?
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
BERT-pytorch - Google AI 2018 BERT pytorch implementation
OpenPrompt - An Open-Source Framework for Prompt-Learning.
scenic - Scenic: A Jax Library for Computer Vision Research and Beyond
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
flaxmodels - Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
elegy - A High Level API for Deep Learning in JAX
allennlp - An open-source NLP research library, built on PyTorch.