memory-efficient-attention-pytorch vs vit-pytorch

| | memory-efficient-attention-pytorch | vit-pytorch |
|---|---|---|
| Mentions | 2 | 11 |
| Stars | 227 | 20,864 |
| Growth | - | - |
| Activity | 6.1 | 7.8 |
| Latest commit | over 1 year ago | 11 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
memory-efficient-attention-pytorch
- [Discussion] Fine tune model for long context
Check out these efficient attention mechanisms, which are almost drop-in replacements: efficient attention and flash attention.
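For intuition about how these mechanisms save memory, here is a minimal sketch (not the linked repos' APIs) of chunked attention in plain PyTorch: queries are processed in blocks so the full seq×seq score matrix is never materialized at once.

```python
import torch
import torch.nn.functional as F

def chunked_attention(q, k, v, q_chunk=1024):
    # q, k, v: (batch, heads, seq, dim_head)
    # Processing queries in chunks keeps peak memory at
    # O(seq * q_chunk) instead of O(seq^2).
    scale = q.shape[-1] ** -0.5
    out = []
    for q_block in q.split(q_chunk, dim=-2):
        scores = q_block @ k.transpose(-2, -1) * scale  # (..., q_chunk, seq)
        out.append(F.softmax(scores, dim=-1) @ v)
    return torch.cat(out, dim=-2)
```

The linked repos implement more sophisticated versions of the same idea, chunking the keys/values as well and recomputing attention during the backward pass.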
- Will Transformers Take over Artificial Intelligence?
I would recommend Routing Transformer https://github.com/lucidrains/routing-transformer but the real truth is nothing beats full attention. Luckily, someone recently figured out how to get past the memory bottleneck. https://github.com/lucidrains/memory-efficient-attention-pyt...
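If memory serves, the linked repo exposes an `Attention` module with a `memory_efficient` flag; the sketch below follows its README from around that time, so parameter names should be verified against the current repo.

```python
import torch
from memory_efficient_attention_pytorch import Attention  # per the repo's README

# bucket sizes trade speed for peak memory; the values here are illustrative
attn = Attention(
    dim = 512,
    dim_head = 64,
    heads = 8,
    memory_efficient = True,
    q_bucket_size = 1024,
    k_bucket_size = 2048
)

x = torch.randn(1, 16384, 512)
out = attn(x)  # (1, 16384, 512)
```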
vit-pytorch
- Is it easier to go from Pytorch to TF and Keras than the other way around?
I also need to learn PySpark, so right now I am going to download the Fashion-MNIST dataset, use PySpark to downsize each image, and put them into separate folders according to their labels (just to show employers I can do some basic ETL with PySpark; not sure how I am going to load it for training in PyTorch yet, though). Then I am going to write the simplest LeNet to try to categorize the Fashion-MNIST dataset (results will most likely be bad, but that's okay). Next, I'll try to learn transfer learning in PyTorch for CNNs, or maybe skip ahead to ViT. Ideally at this point I want to study the attention mechanism a bit more and try to implement SimpleViT, which I saw here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/simple_vit.py
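For the LeNet step mentioned above, a minimal sketch adapted to 1×28×28 Fashion-MNIST inputs could look like this (layer sizes follow the classic LeNet-5; `num_classes` and the usage at the end are illustrative):

```python
import torch
import torch.nn as nn

class LeNet(nn.Module):
    # Classic LeNet-5 layout adapted to 1x28x28 Fashion-MNIST inputs.
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # -> 6x28x28
            nn.ReLU(),
            nn.AvgPool2d(2),                            # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
            nn.ReLU(),
            nn.AvgPool2d(2),                            # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(64, 1, 28, 28)  # a batch of Fashion-MNIST-sized images
logits = LeNet()(x)             # (64, 10)
```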
- What are the best resources online to learn attention and transformers?
For code implementation, check out this git repo. It contains fairly straightforward PyTorch implementations for various ViT papers with references.
- Training CNN/VIT on very small dataset
For ViTs specifically, there's been a good amount of research on extending ViTs to work on small datasets without a large amount of pre-training (which comes with its own host of issues, such as the best way to fine-tune such a huge model). One paper which comes to mind is ViTs for small datasets (https://arxiv.org/abs/2112.13492), which has an implementation in lucidrains' repo here: https://github.com/lucidrains/vit-pytorch
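The repo's README documents a dedicated module for that paper; assuming the `vit_for_small_dataset` import path it shows, usage looks roughly like:

```python
import torch
from vit_pytorch.vit_for_small_dataset import ViT  # module name per the repo's README

v = ViT(
    image_size = 256,
    patch_size = 16,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000)
```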
- Transformers in RL
Here's a PyTorch implementation of ViT: https://github.com/lucidrains/vit-pytorch
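For reference, the repo's README shows basic usage along these lines (the hyperparameter values are the README's example, not recommendations):

```python
import torch
from vit_pytorch import ViT

v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000)
```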
- [P] Release the Vision Transformer Cookbook with Tensorflow! (Thanks to @lucidrains)
looks great Junho! i've linked to it from https://github.com/lucidrains/vit-pytorch like you asked :)
- Will Transformers Take over Artificial Intelligence?
Sure thing. Also if you're getting into transformers I'd recommend lucidrains's GitHub[0] since it has a large collection of them with links to papers. It's nice that things are consolidated.
[0] https://github.com/lucidrains/vit-pytorch
- [D] Surprisingly Simple SOTA Self-Supervised Pretraining - Masked Autoencoders Are Scalable Vision Learners by Kaiming He et al. explained (5-minute summary by Casual GAN Papers)
nah, it is really simple. here is the code https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/mae.py
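Roughly, the repo wraps any of its ViT encoders in an `MAE` module that computes the reconstruction loss on the masked patches; a sketch along the lines of its README (argument names from memory, so verify against the repo):

```python
import torch
from vit_pytorch import ViT, MAE  # MAE export per the repo's README

encoder = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

mae = MAE(
    encoder = encoder,
    masking_ratio = 0.75,  # the MAE paper masks 75% of patches
    decoder_dim = 512,
    decoder_depth = 6
)

images = torch.randn(8, 3, 256, 256)
loss = mae(images)  # reconstruction loss on the masked patches
loss.backward()
```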
- [D] Training vision transformers on a specific dataset from scratch
lucidrains' vit-pytorch has all of what you may need in a clean API
- Can I train a transformer for image classification on Google Colab?
- [R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
I've attempted it here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py but those who have tried it haven't seen the knockout results that it gives in 1d. Perhaps the axial lengths are too small to see a benefit.
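For reference, a minimal 1d rotary embedding sketch in plain PyTorch (the axial 2d variant in rvt.py applies the same idea separately along image rows and columns):

```python
import torch

def apply_rotary(x, base=10000):
    # x: (batch, seq, dim) with even dim. Rotates feature pairs by a
    # position-dependent angle, so q.k dot products depend only on
    # relative position.
    _, n, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)        # (half,)
    angles = torch.arange(n, dtype=x.dtype)[:, None] * freqs[None, :]  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
```

This is applied to the queries and keys (not the values) just before the attention dot product.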
What are some alternatives?

- flash-attention - Fast and memory-efficient exact attention
- reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
- performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
- MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision
- x-transformers - A concise but complete full-attention transformer with a set of promising experimental features from various papers
- convolution-vision-transformers - PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers
- memory-efficient-attention-pytorch
- Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
- DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
- routing-transformer - Fully featured implementation of Routing Transformer
- efficient-attention - An implementation of the efficient attention module.