memory-efficient-attention-pytorch
Implementation of memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (by lucidrains)
memory-efficient-attention-pyt
By lucidrains
| | memory-efficient-attention-pytorch | memory-efficient-attention-pyt |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 227 | - |
| Growth | - | - |
| Activity | 6.1 | - |
| Latest commit | over 2 years ago | - |
| Language | Python | - |
| License | MIT License | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
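Under the hood, the trick from the Rabe & Staats paper cited above is to process keys and values in fixed-size chunks while carrying a running maximum, a running weighted sum of values, and a running softmax normalizer, so the full n × n attention matrix is never materialized. The sketch below is a minimal illustration of that idea in plain PyTorch, not the repository's actual code (real implementations typically also chunk the queries and use gradient checkpointing for the backward pass):

```python
# Minimal sketch of chunked attention in the spirit of "Self-attention
# Does Not Need O(n^2) Memory" (Rabe & Staats, 2021). Illustrative only,
# not the repository's code. Keys/values are consumed chunk by chunk
# while a running max, weighted value sum, and softmax normalizer are
# updated, so the full (n x n) score matrix is never materialized.
import torch

def chunked_attention(q, k, v, k_chunk_size=1024):
    # q, k, v: (batch, heads, seq_len, dim_head)
    q = q * q.shape[-1] ** -0.5

    running_max = torch.full(q.shape[:-1] + (1,), float('-inf'),
                             dtype=q.dtype, device=q.device)
    weighted_sum = torch.zeros_like(q)           # running sum of exp(score) * v
    normalizer = torch.zeros_like(running_max)   # running sum of exp(score)

    for k_chunk, v_chunk in zip(k.split(k_chunk_size, dim=-2),
                                v.split(k_chunk_size, dim=-2)):
        scores = q @ k_chunk.transpose(-2, -1)   # (b, h, n, chunk) -- only one chunk wide
        new_max = torch.maximum(running_max, scores.amax(dim=-1, keepdim=True))

        exp_scores = (scores - new_max).exp()
        rescale = (running_max - new_max).exp()  # re-base old accumulators to the new max

        weighted_sum = weighted_sum * rescale + exp_scores @ v_chunk
        normalizer = normalizer * rescale + exp_scores.sum(dim=-1, keepdim=True)
        running_max = new_max

    return weighted_sum / normalizer

# Sanity check against standard full attention
b, h, n, d = 1, 2, 1024, 64
q, k, v = (torch.randn(b, h, n, d) for _ in range(3))
ref = torch.softmax((q * d ** -0.5) @ k.transpose(-2, -1), dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v, k_chunk_size=256), ref, atol=1e-4)
```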
memory-efficient-attention-pytorch
Posts with mentions or reviews of memory-efficient-attention-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-09.
- [Discussion] Fine tune model for long context
  Check these efficient attention mechanisms, which are almost a drop-in replacement: efficient attention, flash attention (a usage sketch follows this list).
- Will Transformers Take over Artificial Intelligence?
  I would recommend Routing Transformer https://github.com/lucidrains/routing-transformer but the real truth is nothing beats full attention. Luckily, someone recently figured out how to get past the memory bottleneck. https://github.com/lucidrains/memory-efficient-attention-pyt...
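For reference, drop-in usage of memory-efficient-attention-pytorch looks roughly like the sketch below. The import path and keyword arguments (`memory_efficient`, `q_bucket_size`, `k_bucket_size`) are recalled from the repo's README rather than verified here, so treat them as assumptions and check the repository before relying on them:

```python
import torch
from memory_efficient_attention_pytorch import Attention  # import path assumed from the README

attn = Attention(
    dim = 512,
    dim_head = 64,
    heads = 8,
    causal = True,
    memory_efficient = True,  # assumed flag that switches on the chunked algorithm
    q_bucket_size = 1024,     # assumed chunk (bucket) sizes along the query / key dimensions
    k_bucket_size = 2048
)

x = torch.randn(1, 65536, 512)  # a sequence this long would OOM under naive full attention
out = attn(x)                   # (1, 65536, 512)
```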
memory-efficient-attention-pyt
Posts with mentions or reviews of memory-efficient-attention-pyt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-10.
- Will Transformers Take over Artificial Intelligence?
  I would recommend Routing Transformer https://github.com/lucidrains/routing-transformer but the real truth is nothing beats full attention. Luckily, someone recently figured out how to get past the memory bottleneck. https://github.com/lucidrains/memory-efficient-attention-pyt...
What are some alternatives?
When comparing memory-efficient-attention-pytorch and memory-efficient-attention-pyt you can also consider the following projects:
- flash-attention - Fast and memory-efficient exact attention
- routing-transformer - Fully featured implementation of Routing Transformer
- vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
- Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
- performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch