awesome-clip-papers
The most impactful papers related to contrastive pretraining for multimodal models! (by jacobmarks)
YOLO-World
[CVPR 2024] Real-Time Open-Vocabulary Object Detection (by AILab-CVC)
| | awesome-clip-papers | YOLO-World |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 16 | 3,480 |
| Growth (stars, MoM) | - | 14.4% |
| Activity | 5.4 | 9.0 |
| Last commit | 2 months ago | 8 days ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 only |
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
awesome-clip-papers
Posts with mentions or reviews of awesome-clip-papers. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
- A History of CLIP Model Training Data Advances

For a comprehensive catalog of papers pushing the state of CLIP models forward, check out the Awesome CLIP Papers GitHub repository. Additionally, the Zero-shot Prediction Plugin for FiftyOne lets you apply any OpenCLIP-compatible model to your own data, as in the sketch below.
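A minimal sketch of that zero-shot workflow, using the open_clip library directly (the checkpoint name, image path, and label set here are illustrative assumptions; the FiftyOne plugin wraps the same idea behind a point-and-click interface):

```python
# Zero-shot image classification with an OpenCLIP model.
# Checkpoint, image path, and labels are illustrative.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")

labels = ["a photo of a cat", "a photo of a dog", "a photo of a bookshelf"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then softmax over image-text cosine similarities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```

Because the label set is just a list of text prompts, swapping in a different vocabulary requires no retraining, which is what makes this "zero-shot."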
YOLO-World
Posts with mentions or reviews of YOLO-World. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
- A History of CLIP Model Training Data Advances
2024 is shaping up to be the year of multimodal machine learning. From real-time text-to-image models and open-vocabulary models to multimodal large language models like GPT-4V and Gemini Pro Vision, AI is primed for an unprecedented array of interactive multimodal applications and experiences.
- FLaNK Stack Weekly 19 Feb 2024
- Making My Bookshelves Clickable
Post author here. I like this idea. I plan to explore it and make a more generic solution. I'd love to have a point-and-click interface for annotating scenes.
For example, I'd like to be able to click on pieces of coffee equipment in a photo of my coffee setup and attach sticky-note annotations that appear when you hover over each item.
For the bookshelves idea specifically, I would love to have a correction system in place. The problem isn't so much SAM as it is Grounding DINO, the model I'm using for object identification: I pass each identified region to SAM and map the segmentation mask back to the box, as in the sketch below.
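A minimal sketch of that two-stage pipeline, assuming the Hugging Face transformers ports of Grounding DINO and SAM (the post doesn't say which implementations are used; model IDs, thresholds, and the image path are illustrative):

```python
# Grounding DINO (text-prompted boxes) -> SAM (mask per box).
# Model IDs, thresholds, and the image path are illustrative.
import torch
from PIL import Image
from transformers import (
    AutoProcessor, AutoModelForZeroShotObjectDetection,
    SamModel, SamProcessor,
)

image = Image.open("bookshelf.jpg")

# 1) Open-vocabulary detection: boxes for the text prompt
det_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
detector = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
)
inputs = det_processor(images=image, text="a book spine.", return_tensors="pt")
with torch.no_grad():
    outputs = detector(**inputs)
detections = det_processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids,
    box_threshold=0.35, text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]

# 2) Pass each detected box to SAM and keep the mask for that box
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base")
for box in detections["boxes"].tolist():
    sam_inputs = sam_processor(image, input_boxes=[[box]], return_tensors="pt")
    with torch.no_grad():
        sam_outputs = sam(**sam_inputs)
    masks = sam_processor.image_processor.post_process_masks(
        sam_outputs.pred_masks.cpu(),
        sam_inputs["original_sizes"].cpu(),
        sam_inputs["reshaped_input_sizes"].cpu(),
    )
    # masks[0] holds the binary masks for this box; map each mask
    # back to its source box for the clickable-region overlay
```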
Grounding DINO detects a lot of the book spines but often misses one or two. I am planning to try out YOLO-World (https://github.com/AILab-CVC/YOLO-World), which, in my limited testing, performs better for this task.
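One way to run that comparison, assuming the YOLO-World wrapper shipped in the Ultralytics package (the checkpoint name, prompt, and image path are illustrative):

```python
# Open-vocabulary detection with YOLO-World via Ultralytics.
# Checkpoint, prompt, and image path are illustrative.
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")  # downloads the checkpoint on first use
model.set_classes(["book spine"])      # the text prompt defines the vocabulary
results = model.predict("bookshelf.jpg")
results[0].show()                      # visualize the detected boxes
```

As with Grounding DINO, the vocabulary is just a list of prompts, so swapping detectors in the pipeline above only changes where the boxes come from; the SAM step stays the same.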