| | autodistill-metaclip | MetaCLIP |
| --- | --- | --- |
| Mentions | 1 | 5 |
| Stars | 16 | 1,019 |
| Growth | - | 4.6% |
| Activity | 6.4 | 7.5 |
| Latest commit | 5 months ago | 12 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autodistill-metaclip
MetaCLIP – Meta AI Research
I have been playing with MetaCLIP this afternoon and made https://github.com/autodistill/autodistill-metaclip as a pip-installable version. The Facebook repository has some guidance, but you have to pull the weights yourself, save them, and so on.
My inference function (model.predict("image.png")) returns an sv.Classifications object that you can load into supervision for processing (e.g., to get the top k predictions) [1].
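A minimal usage sketch is below. The ontology mapping and class names are illustrative; the constructor follows the pattern of other autodistill base models, so check the repository README for the exact signature.

```python
# Illustrative sketch: classify an image with autodistill-metaclip and
# pull the top prediction out of the returned sv.Classifications object.
from autodistill.detection import CaptionOntology
from autodistill_metaclip import MetaCLIP

# The ontology maps prompts to class names (example values, not from the repo)
base_model = MetaCLIP(
    ontology=CaptionOntology({"a photo of a cat": "cat", "a photo of a dog": "dog"})
)

results = base_model.predict("image.png")  # -> sv.Classifications

# supervision's get_top_k returns (class_ids, confidences) for the top k
class_ids, confidences = results.get_top_k(k=1)
print(class_ids, confidences)
```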
The paper [2] reports the following performance results:
> In Table 4, we observe that MetaCLIP outperforms OpenAI CLIP on ImageNet and average accuracy across 26 tasks, for 3 model scales. With 400 million training data points on ViT-B/32, MetaCLIP outperforms CLIP by +2.1% on ImageNet and by +1.6% on average. On ViT-B/16, MetaCLIP outperforms CLIP by +2.5% on ImageNet and by +1.5% on average. On ViT-L/14, MetaCLIP outperforms CLIP by +0.7% on ImageNet and by +1.4% on average across the 26 tasks.
[1] https://github.com/autodistill/autodistill-metaclip

[2] https://arxiv.org/abs/2309.16671
MetaCLIP
A History of CLIP Model Training Data Advances
How to Build a Semantic Search Engine for Emojis
Whenever I'm working on semantic search applications that connect images and text, I start with a family of models known as contrastive language-image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations, or embeddings, for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we'll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no single right way to compare images and text, but CLIP certainly provides a good starting point.
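As a concrete illustration of the idea, here is a short sketch of scoring an image against candidate text strings using the Hugging Face transformers port of OpenAI's CLIP; the checkpoint name and example strings are assumptions, not taken from the post.

```python
# Sketch of CLIP-style image-text matching with Hugging Face transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("query.png")  # hypothetical input image
texts = ["a smiling face emoji", "a sad face emoji", "a slice of pizza"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled image-text similarity scores; softmax turns
# them into a distribution over the candidate strings.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```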
- MetaCLIP by Meta AI Research
- MetaCLIP – Meta AI Research
What are some alternatives?
clip-interrogator - Image to prompt with BLIP and CLIP
blip-caption - Generate captions for images with Salesforce BLIP
open_clip - An open source implementation of CLIP.
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
emoji-search-plugin - Semantic Emoji Search Plugin for FiftyOne
Text2LIVE - Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)
aphantasia - CLIP + FFT/DWT/RGB = text to image/video