MetaCLIP vs YOLO-World

| | MetaCLIP | YOLO-World |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 1,019 | 3,442 |
| Growth | 4.6% | 13.4% |
| Activity | 7.5 | 9.0 |
| Latest commit | 12 days ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MetaCLIP
- A History of CLIP Model Training Data Advances (Github Repo | Most Popular Model | Paper)
- How to Build a Semantic Search Engine for Emojis

  Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language-image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations, or embeddings, for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point. (A minimal embedding sketch follows this list.)
- MetaCLIP by Meta AI Research
- MetaCLIP – Meta AI Research
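For readers who want to try the embedding workflow the post above describes, the sketch below computes CLIP image and text embeddings and ranks a few captions against one image. It assumes the Hugging Face transformers wrapper around OpenAI's original CLIP checkpoint; the model name, the local file `photo.jpg`, and the candidate captions are illustrative placeholders rather than anything taken from the posts.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the original OpenAI CLIP checkpoint; any CLIP-compatible checkpoint
# on the Hugging Face Hub could be substituted here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path to a local image
captions = ["a smiling face", "a cup of coffee", "a stack of books"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the image
# embedding and each caption embedding; softmax turns them into a ranking.
scores = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, score in zip(captions, scores.tolist()):
    print(f"{score:.3f}  {caption}")
```

For an actual search engine, the model's `get_image_features` and `get_text_features` methods can be used instead to precompute embeddings and index them in a vector store, so only the query text needs to be embedded at search time.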
YOLO-World
- A History of CLIP Model Training Data Advances

  2024 is shaping up to be the year of multimodal machine learning. From real-time text-to-image models and open-world vocabulary models to multimodal large language models like GPT-4V and Gemini Pro Vision, AI is primed for an unprecedented array of interactive multimodal applications and experiences.
- FLaNK Stack Weekly 19 Feb 2024
- Making My Bookshelves Clickable

  Post author here. I like this idea. I plan to explore it and make a more generic solution. I'd love to have a point-and-click interface for annotating scenes.

  For example, I'd like to be able to click on pieces of coffee equipment in a photo of my coffee setup so I can add sticky note annotations when you hover over each item.

  For the bookshelves idea specifically, I would love to have a correction system in place. The problem isn't so much SAM as it is Grounding DINO, the model I'm using for object identification. I then pass each identified region to SAM and map the segmentation mask to the box.

  Grounding DINO detects a lot of book spines, but often misses 1-2. I am planning to try out YOLO-World (https://github.com/AILab-CVC/YOLO-World), which, in my limited testing, performs better for this task. (A detection sketch follows this list.)
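To make the swap described in that comment concrete, here is a minimal sketch of open-vocabulary detection with YOLO-World. It assumes the Ultralytics packaging of the model (`pip install ultralytics`) rather than the original AILab-CVC training code, and the weights file name, the "book spine" prompt, and `bookshelf.jpg` are illustrative placeholders.

```python
from ultralytics import YOLOWorld

# Load pretrained open-vocabulary weights (downloaded on first use).
model = YOLOWorld("yolov8s-world.pt")

# Free-text class prompt: target book spines without task-specific training.
model.set_classes(["book spine"])

results = model.predict("bookshelf.jpg", conf=0.1)
for box in results[0].boxes:
    # xyxy pixel coordinates and confidence for each detected spine; each box
    # could then be handed to SAM to recover a tight segmentation mask,
    # mirroring the Grounding DINO + SAM pipeline described above.
    print(box.xyxy[0].tolist(), float(box.conf))
```

Because the class list is just a text prompt, changing what gets detected is a one-line edit rather than a retraining step.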
What are some alternatives?
blip-caption - Generate captions for images with Salesforce BLIP
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
autodistill-metaclip - MetaCLIP module for use with Autodistill.
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
open_clip - An open source implementation of CLIP.
emoji-search-plugin - Semantic Emoji Search Plugin for FiftyOne