| | x-clip | CapDec |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 685 | 187 |
| Growth | - | - |
| Activity | 5.8 | 5.6 |
| Latest Commit | about 1 year ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
x-clip
- [D] Problems with proprietary datasets
Now, is it possible that some of these images were part of the training set of these models? Maybe, but we can't really be sure without access to the original dataset. To this end, are there any works that study this phenomenon more deeply and technically (with metrics, etc.)? I know of a few attempts to reproduce DALL-E and CLIP on open datasets, but I'm not sure whether such studies have been performed. Unfortunately, I lack both the resources and the technical competency to perform such studies myself, but I would love to hear if you folks know anything about this.
CapDec
- Open source – Unsupervised captioning getting closer to supervised captioning
- Reverse engineer Stable Diffusion images
Cool! I also have a project that does image captioning: https://github.com/DavidHuji/CapDec
- CapDec: SOTA Zero Shot Image Captioning Using Clip and GPT2
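The posts above describe CapDec as zero-shot image captioning built from CLIP and GPT-2. The snippet below is a minimal sketch of that general CLIP-prefix captioning idea (in the spirit of CapDec/ClipCap), not CapDec's actual API: it assumes Hugging Face `transformers` checkpoints (`openai/clip-vit-base-patch32`, `gpt2`) and uses a hypothetical, untrained mapping layer purely to illustrate the data flow from image embedding to generated text.

```python
# Sketch of CLIP-prefix captioning (CapDec/ClipCap-style data flow).
# NOT CapDec's actual code: the mapping layer below is random/untrained and
# only illustrates how a CLIP image embedding can condition GPT-2 generation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")

# Hypothetical mapping network: in CapDec/ClipCap this is trained; here it is random.
prefix_len = 10
mapper = torch.nn.Linear(clip.config.projection_dim, prefix_len * gpt2.config.n_embd)

@torch.no_grad()
def caption(image_path: str, max_new_tokens: int = 20) -> str:
    image = Image.open(image_path)
    pixels = clip_proc(images=image, return_tensors="pt")
    img_emb = clip.get_image_features(**pixels)                        # (1, 512)
    prefix = mapper(img_emb).view(1, prefix_len, gpt2.config.n_embd)   # (1, 10, 768)

    # Greedy decoding conditioned on the visual prefix.
    generated = prefix
    token_ids = []
    for _ in range(max_new_tokens):
        out = gpt2(inputs_embeds=generated)
        next_id = out.logits[:, -1, :].argmax(dim=-1)                  # (1,)
        token_ids.append(next_id.item())
        next_emb = gpt2.transformer.wte(next_id).unsqueeze(1)          # embed new token
        generated = torch.cat([generated, next_emb], dim=1)
    return tok.decode(token_ids)
```

CapDec itself trains the decoder on text only, injecting noise into CLIP text embeddings so that CLIP image embeddings can be substituted at inference time; the sketch above shows only that inference-time flow with placeholder weights.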
What are some alternatives?
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
3DCoMPaT-v2 - 3DCoMPaT++: An improved large-scale 3D vision dataset for compositional recognition
VehicleFinder-CTIM
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
IPViT - Official repository for "Intriguing Properties of Vision Transformers" (NeurIPS 2021--Spotlight)
mmf - A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
GroupViT - Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022.
MAGIC - Language Models Can See: Plugging Visual Controls in Text Generation
LAVIS - A One-stop Library for Language-Vision Intelligence