| | autodistill-metaclip | open_clip |
|---|---|---|
| Mentions | 1 | 28 |
| Stars | 16 | 8,499 |
| Growth | - | 3.9% |
| Activity | 6.4 | 8.2 |
| Latest commit | 5 months ago | 26 days ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autodistill-metaclip
-
MetaCLIP – Meta AI Research
I have been playing with MetaCLIP this afternoon and made https://github.com/autodistill/autodistill-metaclip as a pip-installable version. The Facebook repository has some guidance, but you have to pull the weights yourself, save them, etc.
My inference function (model.predict("image.png")) returns an sv.Classifications object that you can load into supervision for processing (e.g. getting the top k classes) [1]; see the sketch after the quote below.
The paper [2] notes the following in terms of performance:
> In Table 4, we observe that MetaCLIP outperforms OpenAI CLIP on ImageNet and average accuracy across 26 tasks, for 3 model scales. With 400 million training data points on ViT-B/32, MetaCLIP outperforms CLIP by +2.1% on ImageNet and by +1.6% on average. On ViT-B/16, MetaCLIP outperforms CLIP by +2.5% on ImageNet and by +1.5% on average. On ViT-L/14, MetaCLIP outperforms CLIP by +0.7% on ImageNet and by +1.4% on average across the 26 tasks.
[1] https://github.com/autodistill/autodistill-metaclip
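A minimal usage sketch of the package described above. The constructor follows the usual autodistill pattern (a MetaCLIP class taking a CaptionOntology); the prompts, class names, and image path are illustrative, so check the repo README for the exact API. The predict call and the sv.Classifications return type are as described in the comment; get_top_k is supervision's post-processing helper.

```python
from autodistill.detection import CaptionOntology
from autodistill_metaclip import MetaCLIP

# Assumed constructor, following the usual autodistill pattern: an ontology
# mapping text prompts to class names (see the repo README for the exact API).
base_model = MetaCLIP(
    ontology=CaptionOntology({"a photo of a cat": "cat", "a photo of a dog": "dog"})
)

# predict() returns an sv.Classifications object, as described above
results = base_model.predict("image.png")

# supervision post-processing, e.g. keep only the top-1 class
class_ids, confidences = results.get_top_k(1)
print(class_ids, confidences)
```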
open_clip
- FLaNK AI Weekly for 29 April 2024
-
A History of CLIP Model Training Data Advances
While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
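For readers unfamiliar with how these embeddings are compared, here is a minimal sketch using the open_clip API (the model, pretrained tag, image path, and caption strings are illustrative choices, not the author's setup): encode an image and some candidate captions, normalize, and take cosine similarities.

```python
import torch
from PIL import Image
import open_clip

# Load a CLIP-style model; ViT-B-32 with LAION-2B weights is one of the
# pretrained checkpoints listed by open_clip (illustrative choice).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image path
texts = tokenizer(["a smiling face emoji", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)

# Normalize so the dot product is cosine similarity; matching pairs score higher
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T
print(similarity)
```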
-
Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.
Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION-2B (an image dataset with the very problems they are trying to address), C4 (a similarly encumbered text dataset), and the like.
I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.
So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.
Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate how likely it is that they could build a working text encoder from just their stock image dataset.
It's certainly not impossible, but it's impractical. On 248M images (roughly the size of Adobe Stock), CLIP gets 37% on ImageNet, while on the 2,000M images from LAION it reaches 71-80%. And even with 2,000M images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on many billions more images and text tokens.
-
MetaCLIP – Meta AI Research
https://github.com/mlfoundations/open_clip/blob/main/docs/op...
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Is Nicholas Renotte a good guide for a person who knows nothing about ML?
also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution, e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training
-
Generate Image from Vector Embedding
It says on the Stable Diffusion GitHub repo that it uses the “OpenCLIP-ViT/H” https://github.com/mlfoundations/open_clip model as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
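A minimal sketch of loading that encoder family with open_clip and generating a text embedding (the pretrained tag and prompt are illustrative choices; ViT-H-14 is the architecture behind "OpenCLIP-ViT/H"):

```python
import torch
import open_clip

# ViT-H-14 with the LAION-2B checkpoint is the tag open_clip lists for this
# architecture (illustrative choice; verify against the repo's pretrained list).
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")

tokens = tokenizer(["an astronaut riding a horse on the moon"])
with torch.no_grad():
    text_embedding = model.encode_text(tokens)

print(text_embedding.shape)  # pooled text embedding, e.g. [1, 1024] for ViT-H-14
```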
-
What's up in the Python community? – April 2023
https://replicate.com/pharmapsychotic/clip-interrogator
using:
cfg.apply_low_vram_defaults()
interrogate_fast()
I tried lighter models like vit32/laion400 and others, but they are all very slow to load or use (model list: https://github.com/mlfoundations/open_clip).
I'm desperately looking for something more modest and light.
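For context, the two calls quoted above come from clip-interrogator's Config/Interrogator API; a minimal sketch of how they fit together (the model name and image path are illustrative):

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

config = Config(clip_model_name="ViT-L-14/openai")  # illustrative model choice
config.apply_low_vram_defaults()                    # the low-VRAM toggle mentioned above
ci = Interrogator(config)

image = Image.open("photo.jpg").convert("RGB")      # hypothetical image
print(ci.interrogate_fast(image))                   # faster, rougher prompt than interrogate()
```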
-
Low accuracy on my CNN model.
A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
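A minimal sketch of the first suggestion, using a pretrained timm backbone as a frozen feature extractor (the model choice and image path are illustrative); the extracted features can then feed a small classifier instead of training a CNN from scratch:

```python
import timm
import torch
from PIL import Image

# Pretrained backbone with the classification head removed (num_classes=0
# makes the model return pooled features instead of logits).
model = timm.create_model("resnet50", pretrained=True, num_classes=0)
model.eval()

# Build the preprocessing transform that matches the model's pretraining
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

image = transform(Image.open("sample.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(image)  # e.g. a [1, 2048] feature vector for resnet50

# Train a lightweight classifier (logistic regression, small MLP) on these
# features; a CLIP image encoder can be used the same way via open_clip.
```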
What are some alternatives?
clip-interrogator - Image to prompt with BLIP and CLIP
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
Text2LIVE - Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them