convolution-vision-transformers vs vit-pytorch
| | convolution-vision-transformers | vit-pytorch |
|---|---|---|
| Mentions | 2 | 11 |
| Stars | 210 | 17,517 |
| Growth | - | - |
| Activity | 0.0 | 7.3 |
| Latest commit | almost 3 years ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
convolution-vision-transformers
vit-pytorch
Transformers in RL
"Here's a pytorch implementation of ViT https://github.com/lucidrains/vit-pytorch"
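For context on that link, here is a minimal forward-pass sketch in the style of the vit-pytorch README. The constructor arguments (`image_size`, `patch_size`, `num_classes`, `dim`, `depth`, `heads`, `mlp_dim`) follow that README; the concrete values are only illustrative.

```python
import torch
from vit_pytorch import ViT

# Illustrative hyperparameters; choose values to match your dataset.
model = ViT(
    image_size = 256,   # input images are 256 x 256
    patch_size = 32,    # split into (256/32)^2 = 64 patches
    num_classes = 1000,
    dim = 1024,         # token embedding dimension
    depth = 6,          # number of transformer blocks
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1,
)

img = torch.randn(1, 3, 256, 256)   # dummy batch of one RGB image
logits = model(img)                 # shape (1, 1000)
```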
[P] Release the Vision Transformer Cookbook with Tensorflow! (Thanks to @lucidrains)
"looks great Junho! i've linked to it from https://github.com/lucidrains/vit-pytorch like you asked :)"
Will Transformers Take over Artificial Intelligence?
"Sure thing. Also if you're getting into transformers I'd recommend lucidrains's GitHub[0] since it has a large collection of them with links to papers. It's nice that things are consolidated."
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
"I've attempted it here https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py, but those who have tried it haven't seen the knockout results that the 1D case gives. Perhaps the axial lengths are too small to see a benefit."
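The rotary-embedding idea referenced in that post is easier to follow with a concrete sketch. The snippet below is not the rvt.py code; it is a generic, hypothetical illustration of rotary position embeddings in PyTorch: each (even, odd) channel pair of the queries and keys is rotated by a position-dependent angle before the attention dot product, so relative position enters the attention scores through the rotation.

```python
import torch

def build_rotary(seq_len, head_dim, base=10000.0):
    # One frequency per channel pair, as in the rotary embedding paper.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.arange(seq_len).float()[:, None] * inv_freq[None, :]  # (seq, head_dim/2)
    return angles.cos(), angles.sin()

def apply_rotary(x, cos, sin):
    # x: (batch, heads, seq, head_dim); rotate each (even, odd) channel pair.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)

# Example: queries/keys for 2 heads of dimension 64 over 16 positions.
q = torch.randn(1, 2, 16, 64)
k = torch.randn(1, 2, 16, 64)
cos, sin = build_rotary(seq_len=16, head_dim=64)
q, k = apply_rotary(q, cos, sin), apply_rotary(k, cos, sin)
attn = (q @ k.transpose(-2, -1)) / 64 ** 0.5  # relative position now affects the scores
```

The 2D variant discussed in the quoted comment applies this kind of rotation along each image axis separately, which is why the comment speaks of axial lengths being too small to show a benefit.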
What are some alternatives?
MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision
reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
efficient-attention - An implementation of the efficient attention module.
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
EasyCV - An all-in-one toolkit for computer vision
CeiT - Implementation of Convolutional enhanced image Transformer
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
vit-tensorflow - Vision Transformer Cookbook with Tensorflow