vit-pytorch VS CeiT

Compare vit-pytorch vs CeiT and see what their differences are.

vit-pytorch

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch (by lucidrains)
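
To make the "single transformer encoder" premise concrete, here is a minimal usage sketch modeled on the ViT class from the vit-pytorch README; the hyperparameters are illustrative, not prescriptive:

```python
import torch
from vit_pytorch import ViT

# A ViT that splits a 256x256 image into 32x32 patches and classifies it
# with a single transformer encoder, as the project description states.
model = ViT(
    image_size = 256,   # input resolution (must be divisible by patch_size)
    patch_size = 32,    # yields (256/32)^2 = 64 patch tokens per image
    num_classes = 1000,
    dim = 1024,         # token embedding dimension
    depth = 6,          # number of transformer encoder layers
    heads = 16,         # attention heads per layer
    mlp_dim = 2048,     # hidden size of the feed-forward sublayer
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)  # dummy batch of one RGB image
preds = model(img)                 # classification logits of shape (1, 1000)
```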
                vit-pytorch    CeiT
Mentions        11             1
Stars           18,006         95
Growth          -              -
Activity        7.3            0.0
Latest commit   10 days ago    about 3 years ago
Language        Python         Python
License         MIT License    MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

vit-pytorch

Posts with mentions or reviews of vit-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-06-15.

CeiT

Posts with mentions or reviews of CeiT. We have used some of these posts to build our list of alternatives and similar projects.

What are some alternatives?

When comparing vit-pytorch and CeiT, you can also consider the following projects:

MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision

T2T-ViT - ICCV 2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

convolution-vision-transformers - PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers

efficientnet - Implementation of the EfficientNet model in Keras and TensorFlow Keras.

reformer-pytorch - Reformer, the efficient Transformer, in Pytorch

performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch

dytox - Dynamic Token Expansion with Continual Transformers, accepted at CVPR 2022

DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch

efficient-attention - An implementation of the efficient attention module.

Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"