deep-implicit-attention
Implementation of deep implicit attention in PyTorch (by mcbal)
performer-pytorch
An implementation of Performer, a linear attention-based transformer, in PyTorch (by lucidrains)
| | deep-implicit-attention | performer-pytorch |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 61 | 1,055 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Last commit | over 2 years ago | about 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deep-implicit-attention
Posts with mentions or reviews of deep-implicit-attention. We have used some of these posts to build our list of alternatives and similar projects.
- [P] Deep Implicit Attention: A Mean-Field Theory Perspective on Attention Mechanisms
  Code: https://github.com/mcbal/deep-implicit-attention
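The linked paper interprets self-attention as the solution of mean-field fixed-point equations for a model of interacting vector spins. As a toy illustration of the implicit (fixed-point) part of that idea only, and not the repository's actual code, one can iterate a mean-field update to convergence; the tanh nonlinearity, coupling scale, and shapes below are illustrative assumptions:

```python
import torch

def implicit_attention(h, J, n_iters=100, tol=1e-5):
    """Solve the mean-field self-consistency equation x = tanh(J x + h).

    h: (n, d) per-token inputs, playing the role of external fields.
    J: (n, n) token-to-token couplings, kept small so the map contracts.
    A generic deep-equilibrium-style fixed point, not the exact
    parametrization used in deep-implicit-attention.
    """
    x = torch.zeros_like(h)
    for _ in range(n_iters):
        x_new = torch.tanh(J @ x + h)
        if (x_new - x).abs().max() < tol:
            break
        x = x_new
    return x

n, d = 16, 8
h = torch.randn(n, d)
J = 0.1 * torch.randn(n, n) / n ** 0.5  # small couplings -> contraction
out = implicit_attention(h, J)          # (16, 8) fixed point
```

In the deep-equilibrium setting, gradients flow through the fixed point via implicit differentiation rather than by backpropagating through the unrolled loop.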
performer-pytorch
Posts with mentions or reviews of performer-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-21.
- [R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
  Performer is the best linear attention variant, but linear attention is just one type of efficient attention solution. I have rotary embeddings already in the repo https://github.com/lucidrains/performer-pytorch and you can witness this phenomenon yourself by toggling it on/off.
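For context on the technique the post above refers to: rotary positional embeddings rotate each (even, odd) pair of query/key feature dimensions by a position-dependent angle, so that dot products between queries and keys depend only on their relative offset. Below is a minimal self-contained sketch of that rotation, following the RoFormer formulation; it is not performer-pytorch's internal code, which exposes its own toggle as described in the comment above:

```python
import torch

def rotary_embedding(x, base=10000):
    # x: (..., seq_len, dim) with even dim. Rotates each feature pair
    # (x_{2i}, x_{2i+1}) by angle pos * base^(-2i/dim), as in RoFormer.
    n, d = x.shape[-2], x.shape[-1]
    pos = torch.arange(n, dtype=x.dtype, device=x.device)
    freqs = base ** (-torch.arange(0, d, 2, dtype=x.dtype, device=x.device) / d)
    angles = pos[:, None] * freqs[None, :]   # (seq_len, dim / 2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys (never values) before the attention product:
q = rotary_embedding(torch.randn(128, 64))
k = rotary_embedding(torch.randn(128, 64))
```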
- Why has Google's Performer model not replaced traditional softmax attention?
  Here's a PyTorch implementation if you want to play around with it: lucidrains/performer-pytorch: An implementation of Performer, a linear attention-based transformer, in PyTorch (github.com)
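Some background on the question above: linear attention replaces the softmax kernel with a feature map phi, so that (phi(Q) phi(K)^T) V can be reassociated as phi(Q) (phi(K)^T V), cutting the cost from O(n^2) to O(n) in sequence length. Here is a minimal sketch using the simple elu + 1 feature map of Katharopoulos et al., chosen for illustration only; Performer's actual FAVOR+ mechanism uses positive orthogonal random features instead:

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard attention: materializes an (n, n) score matrix -> O(n^2).
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # Kernelized attention: never forms the (n, n) matrix -> O(n).
    # phi(x) = elu(x) + 1 keeps features positive (Katharopoulos et al.);
    # Performer draws random features that approximate the softmax kernel.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v             # (d, d) key/value summary
    z = q @ k.sum(dim=-2).unsqueeze(-1)      # (n, 1) normalizer
    return (q @ kv) / (z + eps)

n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([1024, 64])
```

FAVOR+'s random features give an unbiased estimate of the softmax kernel, which is why Performer approximates regular attention more faithfully than simpler feature maps, though the approximation still trades some quality for speed.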
What are some alternatives?
When comparing deep-implicit-attention and performer-pytorch you can also consider the following projects:
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers