performer-pytorch vs reformer-pytorch
| | performer-pytorch | reformer-pytorch |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 1,055 | 2,058 |
| Growth | - | - |
| Activity | 1.8 | 1.8 |
| Latest commit | over 2 years ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
performer-pytorch
- [R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention

  > Performer is the best linear attention variant, but linear attention is just one type of efficient attention solution. I have rotary embeddings already in the repo https://github.com/lucidrains/performer-pytorch and you can witness this phenomenon yourself by toggling it on/off
- Why has Google's Performer model not replaced traditional softmax attention?

  > Here's a PyTorch implementation if you want to play around with it: lucidrains/performer-pytorch: An implementation of Performer, a linear attention-based transformer, in Pytorch (github.com)
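The rotary-embedding toggle mentioned in the first quote corresponds to a constructor flag in the repo. Below is a minimal sketch based on the usage shown in the repo's README; the `PerformerLM` class exists in the package, but treat the exact argument names (e.g. `rotary_position_emb`) as assumptions that may differ across versions:

```python
import torch
from performer_pytorch import PerformerLM  # pip install performer-pytorch

# Minimal sketch: a small causal Performer language model.
# `rotary_position_emb` is the toggle discussed above; flip it to
# compare convergence with and without rotary embeddings.
model = PerformerLM(
    num_tokens = 20000,          # vocabulary size
    max_seq_len = 2048,
    dim = 512,
    depth = 6,
    heads = 8,
    causal = True,
    rotary_position_emb = True   # set False to train without rotary embeddings
)

x = torch.randint(0, 20000, (1, 2048))
logits = model(x)                # shape: (1, 2048, 20000)
print(logits.shape)
```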
reformer-pytorch
- [D] How to do Long Text (>10k tokens) Summarization?

  > The lucidrains implementation of Reformer can handle tens of thousands of tokens on Google Colab (with batch size 1).
- [R] How to go about non-reproducible research?

  > This is what I call great code: https://github.com/lucidrains/reformer-pytorch
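As a concrete illustration of the long-sequence claim above, here is a minimal sketch following the `ReformerLM` usage in the repo's README, running a single batch of 16k tokens. The class and parameters are taken from the README, but the exact names and defaults are assumptions that may differ across versions:

```python
import torch
from reformer_pytorch import ReformerLM  # pip install reformer-pytorch

# Minimal sketch: LSH attention plus reversible layers keep memory low
# enough that very long sequences fit on a single GPU with batch size 1.
model = ReformerLM(
    num_tokens = 20000,   # vocabulary size
    dim = 512,
    depth = 6,
    max_seq_len = 16384,  # tens of thousands of tokens
    heads = 8,
    bucket_size = 64,     # seq length should be divisible by 2 * bucket_size
    n_hashes = 4,         # more hash rounds -> more accurate LSH attention
    causal = True
)

x = torch.randint(0, 20000, (1, 16384)).long()  # batch size 1
y = model(x)              # shape: (1, 16384, 20000)
print(y.shape)
```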
What are some alternatives?
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow
simpleT5 - built on top of PyTorch-lightning⚡️ and Transformers🤗, simpleT5 lets you quickly train your T5 models.
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
Fast-Transformer - An implementation of Fastformer: Additive Attention Can Be All You Need, a Transformer Variant in TensorFlow
LSTM-FCN - Codebase for the paper LSTM Fully Convolutional Networks for Time Series Classification
deep-implicit-attention - Implementation of deep implicit attention in PyTorch
Conformer - An implementation of Conformer: Convolution-augmented Transformer for Speech Recognition, a Transformer Variant in TensorFlow/Keras
scenic - Scenic: A Jax Library for Computer Vision Research and Beyond
DeepPoseKit - a toolkit for pose estimation using deep learning