| | performer-pytorch | vit-pytorch |
|---|---|---|
| Mentions | 2 | 11 |
| Stars | 1,088 | 20,375 |
| Growth | - | - |
| Activity | 1.8 | 7.4 |
| Latest commit | almost 3 years ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
performer-pytorch
-
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
Performer is the best linear attention variant, but linear attention is just one type of efficient attention solution. I have rotary embeddings already in the repo https://github.com/lucidrains/performer-pytorch and you can witness this phenomenon yourself by toggling it on / off
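For context on what that toggle does: rotary embeddings rotate each query/key channel pair by a position-dependent angle before the attention dot product, so attention scores depend only on relative position. A minimal standalone sketch in plain PyTorch (illustrative; it does not use the repo's actual helper names):

```python
import torch

def rotate_half(x):
    # Pair channel i with channel i + d/2 and rotate: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary(q, k):
    # q, k: (batch, heads, seq_len, head_dim), head_dim even
    head_dim, seq_len = q.shape[-1], q.shape[-2]
    # Standard RoPE frequencies: theta_i = 10000^(-2i / head_dim)
    inv_freq = 1.0 / (10000 ** (torch.arange(0, head_dim, 2).float() / head_dim))
    freqs = torch.einsum('i,j->ij', torch.arange(seq_len).float(), inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1)  # (seq_len, head_dim)
    cos, sin = emb.cos(), emb.sin()
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin

q, k = apply_rotary(torch.randn(1, 8, 128, 64), torch.randn(1, 8, 128, 64))
```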
-
Why has Google's Performer model not replaced traditional softmax attention?
Here's a PyTorch implementation if you want to play around with it: lucidrains/performer-pytorch: An implementation of Performer, a linear attention-based transformer, in Pytorch (github.com)
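For a quick start, the repo's README shows usage along these lines (a sketch; the rotary keyword argument name is an assumption and may differ across versions, so verify against the current README):

```python
import torch
from performer_pytorch import PerformerLM

# Sketch based on the repo's README; check the repo for current kwargs.
model = PerformerLM(
    num_tokens = 20000,
    max_seq_len = 2048,
    dim = 512,
    depth = 6,
    heads = 8,
    causal = True,               # autoregressive language modeling
    nb_features = 256,           # random features for the FAVOR+ kernel
    rotary_position_emb = True   # assumed kwarg name for the rotary toggle
)

x = torch.randint(0, 20000, (1, 2048))
logits = model(x)                # (1, 2048, 20000)
```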
vit-pytorch
-
Is it easier to go from Pytorch to TF and Keras than the other way around?
I also need to learn PySpark, so right now I am going to download the Fashion-MNIST dataset, use PySpark to downsize each image and put them into separate folders according to their labels (just to show employers I can do some basic ETL with PySpark; not sure how I am going to load them for training in PyTorch yet, though). Then I am going to write the simplest LeNet to try to categorize the Fashion-MNIST dataset (results will most likely be bad, but that's okay). Next, I'll try to learn transfer learning in PyTorch for CNNs, or maybe skip ahead to ViT. Ideally at this point I want to study the attention mechanism a bit more and try to implement SimpleViT, which I saw here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/simple_vit.py
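For reference, the SimpleViT module linked in that comment can be driven roughly like this, following the vit-pytorch README (hyperparameters are illustrative):

```python
import torch
from vit_pytorch import SimpleViT

# Sketch based on the vit-pytorch README; sizes are illustrative.
v = SimpleViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000)
```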
-
What are the best resources online to learn attention and transformers?
For code implementation, check out this git repo. It contains fairly straightforward PyTorch implementations for various ViT papers with references.
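The repo's basic ViT class, per its README, looks roughly like this (a sketch; sizes are illustrative):

```python
import torch
from vit_pytorch import ViT

# Sketch based on the vit-pytorch README.
v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000)
```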
-
Training CNN/VIT on very small dataset
For ViTs specifically, there's been a good amount of research into extending ViTs to work on small datasets without a large amount of pre-training (which comes with its own host of issues, such as the best way to fine-tune such a huge model). One paper that comes to mind is Vision Transformer for Small-Size Datasets (https://arxiv.org/abs/2112.13492), which has an implementation in lucidrains' repo here: https://github.com/lucidrains/vit-pytorch
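That paper's variant ships as its own module in the repo; a sketch of the usage, assuming the module path shown in the vit-pytorch README (verify against the current repo):

```python
import torch
# Assumed module path for the "Vision Transformer for Small-Size
# Datasets" variant, per the vit-pytorch README.
from vit_pytorch.vit_for_small_dataset import ViT

v = ViT(
    image_size = 256,
    patch_size = 16,
    num_classes = 100,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(4, 3, 256, 256)
preds = v(img)  # (4, 100)
```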
-
Transformers in RL
Here's a PyTorch implementation of ViT: https://github.com/lucidrains/vit-pytorch
-
[P] Release the Vision Transformer Cookbook with TensorFlow! (Thanks to @lucidrains)
Looks great, Junho! I've linked to it from https://github.com/lucidrains/vit-pytorch like you asked :)
-
Will Transformers Take over Artificial Intelligence?
Sure thing. Also if you're getting into transformers I'd recommend lucidrains's GitHub[0] since it has a large collection of them with links to papers. It's nice that things are consolidated.
[0] https://github.com/lucidrains/vit-pytorch
-
[D] Surprisingly Simple SOTA Self-Supervised Pretraining - Masked Autoencoders Are Scalable Vision Learners by Kaiming He et al. explained (5-minute summary by Casual GAN Papers)
Nah, it is really simple. Here is the code: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/mae.py
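The MAE wrapper in that file takes an existing ViT as the encoder and returns a reconstruction loss over masked patches; a sketch following the repo's README:

```python
import torch
from vit_pytorch import ViT, MAE

# Sketch based on the vit-pytorch README for the MAE wrapper.
encoder = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

mae = MAE(
    encoder = encoder,
    masking_ratio = 0.75,   # the paper recommends masking 75% of patches
    decoder_dim = 512,      # lightweight decoder, as in the paper
    decoder_depth = 6
)

images = torch.randn(8, 3, 256, 256)
loss = mae(images)          # reconstruction loss on the masked patches
loss.backward()
```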
-
[D] Training vision transformers on a specific dataset from scratch
lucidrains' vit-pytorch has all of what you may need in a clean API
-
Can I train a transformer for image classification on Google Colab?
-
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
I've attempted it here https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py but those who have tried it haven't seen the knockout results they saw in 1D. Perhaps the axial lengths are too small to see a benefit.
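In 2D, the usual move is axial rotary: split the head dimension so half of the rotation pairs follow the row coordinate and half the column coordinate. A minimal plain-PyTorch sketch of that idea (illustrative; not necessarily how rvt.py organizes it):

```python
import torch

def rotate_half(x):
    # Pair channel i with channel i + d/2 and rotate: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def axial_rotary(q, height, width):
    # q: (batch, heads, height * width, head_dim), head_dim divisible by 4.
    # Half the rotation pairs use the row index, the other half the column.
    d = q.shape[-1]
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d // 2, 2).float() / (d // 2)))
    fy = torch.einsum('i,j->ij', torch.arange(height).float(), inv_freq)  # (H, d/4)
    fx = torch.einsum('i,j->ij', torch.arange(width).float(), inv_freq)   # (W, d/4)
    fy = fy[:, None, :].expand(height, width, -1)   # broadcast rows over columns
    fx = fx[None, :, :].expand(height, width, -1)   # broadcast columns over rows
    theta = torch.cat((fy, fx), dim=-1).reshape(height * width, d // 2)
    emb = torch.cat((theta, theta), dim=-1)         # (H*W, d)
    return q * emb.cos() + rotate_half(q) * emb.sin()

q = axial_rotary(torch.randn(1, 8, 16 * 16, 64), height=16, width=16)
```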
What are some alternatives?
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers
reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow
MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
convolution-vision-transformers - PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
deep-implicit-attention - Implementation of deep implicit attention in PyTorch
efficient-attention - An implementation of the efficient attention module.
scenic - Scenic: A Jax Library for Computer Vision Research and Beyond
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)