| | vit-pytorch | reformer-pytorch |
|---|---|---|
| Mentions | 11 | 2 |
| Stars | 21,491 | 2,132 |
| Growth | 2.9% | - |
| Activity | 7.8 | 1.8 |
| Last commit | 19 days ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vit-pytorch
-
Is it easier to go from Pytorch to TF and Keras than the other way around?
I also need to learn PySpark, so right now I am going to download the Fashion-MNIST dataset, use PySpark to downsize each image and put them into separate folders according to their labels (just to show employers I can do some basic ETL with PySpark; I'm not sure how I am going to load them for training in PyTorch yet, though). Then I am going to write the simplest LeNet to try to classify Fashion-MNIST (the results will most likely be bad, but that's okay). Next, I'll try to learn transfer learning in PyTorch for CNNs, or maybe skip ahead to ViT. Ideally, at this point I want to study the attention mechanism a bit more and try to implement SimpleViT, which I saw here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/simple_vit.py
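The first step of SimpleViT (and ViT generally) is to cut each image into fixed-size patches and linearly project them into token embeddings. Here is a minimal numpy sketch of that patch-embedding idea, not the repo's actual code; the 7x7 patch size and 64-dim projection are illustrative choices sized for 28x28 Fashion-MNIST images:

```python
import numpy as np

def patchify(images, patch_size):
    """Split (B, H, W, C) images into flattened patches: (B, N, patch_size*patch_size*C)."""
    b, h, w, c = images.shape
    ph, pw = h // patch_size, w // patch_size
    x = images.reshape(b, ph, patch_size, pw, patch_size, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)      # (B, ph, pw, p, p, C)
    return x.reshape(b, ph * pw, patch_size * patch_size * c)

# Fashion-MNIST-sized example: 28x28 grayscale, 7x7 patches -> 16 tokens of dim 49
imgs = np.random.rand(2, 28, 28, 1)
tokens = patchify(imgs, 7)                  # (2, 16, 49)
proj = np.random.randn(49, 64) * 0.02       # stands in for a learned linear projection
embedded = tokens @ proj                    # (2, 16, 64) token embeddings
```

In a real ViT the projection is a learned `nn.Linear`, and position embeddings are added before the transformer blocks.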
-
What are the best resources online to learn attention and transformers?
For code implementation, check out this git repo. It contains fairly straightforward PyTorch implementations for various ViT papers with references.
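As a companion to reading those implementations, the core operation in every transformer block is scaled dot-product attention. A self-contained numpy sketch (illustrative, not taken from the repo):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)    # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ v                              # weighted average of values

q = k = v = np.random.randn(4, 8)   # 4 tokens, dim 8 (self-attention)
out = attention(q, k, v)            # (4, 8)
```

Because each row of attention weights sums to 1, the output for each token is a convex combination of the value vectors.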
-
Training CNN/VIT on very small dataset
For ViTs specifically, there's been a good amount of research on extending ViTs to work on small datasets without a large amount of pre-training (which comes with its own host of issues, such as the best way to fine-tune such a huge model). One paper that comes to mind is Vision Transformer for Small-Size Datasets (https://arxiv.org/abs/2112.13492), which has an implementation in lucidrains's repo here: https://github.com/lucidrains/vit-pytorch
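One of that paper's two tricks is Locality Self-Attention: mask the diagonal so a token cannot attend to itself, and replace the fixed 1/sqrt(d) scale with a learnable temperature, which sharpens the attention distribution on small datasets. A rough numpy sketch of the idea (names and shapes are illustrative, not the repo's code):

```python
import numpy as np

def lsa_weights(q, k, temperature):
    """Locality Self-Attention (sketch, after arXiv:2112.13492):
    diagonal masking + learnable temperature instead of 1/sqrt(d)."""
    n = q.shape[0]
    scores = (q @ k.T) / temperature
    scores[np.eye(n, dtype=bool)] = -1e9            # forbid attending to self
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

q = k = np.random.randn(5, 16)
w = lsa_weights(q, k, temperature=0.5)   # (5, 5); diagonal entries are ~0
```

In the actual model the temperature is a trained parameter per head; 0.5 here is just a placeholder value.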
-
Transformers in RL
Here's a PyTorch implementation of ViT: https://github.com/lucidrains/vit-pytorch
-
[P] Release the Vision Transformer Cookbook with Tensorflow ! (Thanks to @lucidrains)
looks great Junho! i've linked to it from https://github.com/lucidrains/vit-pytorch like you asked :)
-
Will Transformers Take over Artificial Intelligence?
Sure thing. Also if you're getting into transformers I'd recommend lucidrains's GitHub[0] since it has a large collection of them with links to papers. It's nice that things are consolidated.
[0] https://github.com/lucidrains/vit-pytorch
-
[D] Surprisingly Simple SOTA Self-Supervised Pretraining - Masked Autoencoders Are Scalable Vision Learners by Kaiming He et al. explained (5-minute summary by Casual GAN Papers)
nah, it is really simple. here is the code https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/mae.py
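Most of the simplicity is in the masking: the encoder only ever sees a small random subset of patch tokens, and the decoder reconstructs the rest. A hedged numpy sketch of that masking step (shapes are illustrative, not the linked file's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_masking(tokens, masking_ratio=0.75):
    """MAE-style masking (sketch): keep a random 25% of patch tokens;
    the encoder runs only on the kept subset, which is the big compute win."""
    n = tokens.shape[0]
    num_keep = int(n * (1 - masking_ratio))
    perm = rng.permutation(n)
    keep_idx, mask_idx = perm[:num_keep], perm[num_keep:]
    return tokens[keep_idx], keep_idx, mask_idx

patches = np.random.randn(196, 768)              # e.g. 14x14 ViT patch tokens
visible, keep_idx, mask_idx = random_masking(patches)
# visible: (49, 768) -> encoder input; the decoder later predicts the masked patches
```

The loss is then just a pixel-space MSE on the masked patches.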
-
[D] Training vision transformers on a specific dataset from scratch
lucidrains' vit-pytorch has all of what you may need in a clean API
-
Can I train a transformer for image classification on Google Colab?
-
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
I've attempted it here https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py, but those who have tried it haven't seen the knockout results it gives in the 1D case. Perhaps the axial lengths are too small to see a benefit.
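For intuition: rotary embeddings rotate each pair of feature dimensions by an angle proportional to the token's position, so dot products between rotated queries and keys depend only on relative position. A numpy sketch of the 1D case (illustrative, not the rvt.py code):

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Rotary positional embedding (RoPE) sketch: rotate each (even, odd)
    pair of feature dims by position * frequency, like complex multiplication."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)       # (d/2,) per-pair rotation rates
    angles = positions[:, None] * freqs[None, :]    # (n, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin            # 2D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = np.random.randn(6, 8)          # 6 tokens, head dim 8
q_rot = rope(q, np.arange(6.0))    # rotation preserves each token's norm
```

Because it is a pure rotation, it composes with both regular and efficient (e.g. linear) attention, which is the property the post title refers to.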
reformer-pytorch
-
[D] How to do Long Text ( > 10k tokens) Summarization?
The lucidrains implementation of Reformer can handle tens of thousands of tokens on Google Colab (with batch size 1).
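Reformer gets there by replacing full attention with LSH attention: queries/keys are hashed into buckets with random projections, and attention is computed only within (sorted, chunked) buckets, dropping the cost from O(n²) toward O(n log n). A numpy sketch of just the bucketing step (illustrative, not the library's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def lsh_buckets(vectors, n_buckets):
    """Angular LSH as used in Reformer (sketch): project onto random
    directions R and take argmax over [R; -R] as the bucket id, so nearby
    vectors tend to land in the same bucket."""
    d = vectors.shape[-1]
    r = rng.standard_normal((d, n_buckets // 2))
    proj = vectors @ r                                         # (n, n_buckets/2)
    return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

tokens = np.random.randn(1024, 64)   # full attention here would need 1024^2 scores
buckets = lsh_buckets(tokens, n_buckets=8)
# attention is then computed only among tokens sharing a bucket
```

The real implementation hashes multiple rounds and shares QK vectors, but the bucketing idea is the core of the memory savings.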
-
[R]How to go about non-reproducible research?
This is what I call great code: https://github.com/lucidrains/reformer-pytorch
What are some alternatives?
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
simpleT5 - Built on top of PyTorch-Lightning⚡️ and Transformers🤗, letting you quickly train your T5 models.
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
DeepPoseKit - a toolkit for pose estimation using deep learning
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
Fast-Transformer - An implementation of Fastformer: Additive Attention Can Be All You Need, a Transformer Variant in TensorFlow
CeiT - Implementation of Convolutional enhanced image Transformer
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
Conformer - An implementation of Conformer: Convolution-augmented Transformer for Speech Recognition, a Transformer Variant in TensorFlow/Keras
MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision
LSTM-FCN - Codebase for the paper LSTM Fully Convolutional Networks for Time Series Classification