x-clip
A concise but complete implementation of CLIP with various experimental improvements from recent papers (by lucidrains)
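For orientation, the core idea x-clip builds on is CLIP's symmetric contrastive objective over matched image-text embedding pairs. Below is a minimal plain-PyTorch sketch of that loss, illustrative only and not x-clip's own API; the embedding dimension and temperature are stand-in values.

```python
# Hedged sketch of the CLIP-style symmetric contrastive loss that x-clip
# implements and extends. Plain PyTorch, not the x-clip API; the embedding
# dimension (512) and temperature (0.07) are illustrative assumptions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)
    # Each image should match its own caption (rows) and vice versa (columns).
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Usage with dummy embeddings from hypothetical encoders:
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
loss = clip_contrastive_loss(img, txt)
```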
IPViT
Official repository for "Intriguing Properties of Vision Transformers" (NeurIPS 2021, Spotlight) (by Muzammal-Naseer)
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
x-clip
Posts with mentions or reviews of x-clip.
We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2022-05-19.
- [D] Problems with proprietary datasets
Now, is it possible that some of these images were part of the training set of these models? Maybe, but we can't really be sure without access to the original dataset. To this end, are there any works that study this phenomenon more deeply and technically (with metrics, etc.)? I know of a few attempts to reproduce DALL-E and CLIP on open datasets, but I'm not sure whether such studies have been performed. Unfortunately I lack both the resources and the technical competency to perform such studies myself, but I would love to hear if you folks know anything about this.
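One practical way to probe this kind of train/test contamination is to embed both the candidate images and a sample of the training corpus with an image encoder and flag near-duplicates by cosine similarity. The sketch below is illustrative only: it assumes OpenAI's `clip` package is installed, and the file paths and similarity threshold are hypothetical placeholders, not values from any published study.

```python
# Hedged sketch: near-duplicate detection between a candidate image set and a
# training-corpus sample, using CLIP image embeddings and cosine similarity.
# `train_paths`, `test_paths`, and the 0.95 threshold are illustrative choices.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed(paths):
    # Encode a list of image files into L2-normalized CLIP embeddings.
    batch = torch.stack(
        [preprocess(Image.open(p).convert("RGB")) for p in paths]
    ).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch).float()
    return feats / feats.norm(dim=-1, keepdim=True)

train_paths = ["train/0001.jpg", "train/0002.jpg"]  # hypothetical sample
test_paths = ["queries/cat.jpg"]                    # hypothetical queries

train_emb = embed(train_paths)
test_emb = embed(test_paths)

# Cosine-similarity matrix; values near 1.0 suggest (near-)duplicates.
sims = test_emb @ train_emb.T
for i, p in enumerate(test_paths):
    best = sims[i].max().item()
    if best > 0.95:  # threshold is a guess; tune on known duplicate pairs
        print(f"{p}: possible training-set duplicate (cos={best:.3f})")
```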
IPViT
Posts with mentions or reviews of IPViT.
We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2022-05-19.
- [D] Problems with proprietary datasets
Code for https://arxiv.org/abs/2105.10497 found: https://github.com/Muzammal-Naseer/Intriguing-Properties-of-Vision-Transformers
- [R] Intriguing Properties of Vision Transformers
Pretrained models, evaluation, and training code are in this repo!
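Among the properties the paper studies is ViT robustness to occlusion, probed by dropping a fraction of image patches before classification. Below is a minimal plain-PyTorch sketch of that random patch-drop idea; it is not the IPViT repo's own code, and the 16px patch size and 50% drop ratio are illustrative assumptions.

```python
# Hedged sketch of random patch dropping, in the spirit of the paper's
# occlusion-robustness experiments. Not the IPViT repo's code; patch size
# and drop ratio are illustrative assumptions.
import torch

def random_patch_drop(images, patch_size=16, drop_ratio=0.5):
    """Zero out a random subset of non-overlapping patches in a batch of images."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    num_patches = gh * gw
    num_drop = int(num_patches * drop_ratio)
    out = images.clone()
    for i in range(b):
        # Pick patch indices to drop, independently per image.
        idx = torch.randperm(num_patches)[:num_drop]
        rows, cols = idx // gw, idx % gw
        for r, cc in zip(rows.tolist(), cols.tolist()):
            out[i, :,
                r * patch_size:(r + 1) * patch_size,
                cc * patch_size:(cc + 1) * patch_size] = 0.0
    return out

# Usage: compare a classifier's accuracy on clean vs. occluded inputs.
images = torch.randn(4, 3, 224, 224)   # stand-in batch
occluded = random_patch_drop(images)   # 50% of 16x16 patches zeroed
```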
What are some alternatives?
When comparing x-clip and IPViT, you can also consider the following projects:
CapDec - CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
ViTs-vs-CNNs - [NeurIPS 2021]: Are Transformers More Robust Than CNNs? (Pytorch implementation & checkpoints)
VehicleFinder-CTIM
GroupViT - Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022.