MetaCLIP
ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering (by facebookresearch)
emoji-search-plugin
Semantic Emoji Search Plugin for FiftyOne (by jacobmarks)
| | MetaCLIP | emoji-search-plugin |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 1,019 | 5 |
| Growth | 4.6% | - |
| Activity | 7.5 | 6.3 |
| Latest commit | 13 days ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MetaCLIP
Posts with mentions or reviews of MetaCLIP. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
- A History of CLIP Model Training Data Advances (Github Repo | Most Popular Model | Paper)

- How to Build a Semantic Search Engine for Emojis

  Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language-image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations, or embeddings, for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point. (A minimal sketch of this image-text scoring follows the list below.)
- MetaCLIP by Meta AI Research
- MetaCLIP – Meta AI Research
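The excerpt above describes CLIP's image-text matching at a high level. As a concrete illustration, here is a minimal sketch of scoring one image against a few candidate captions with the original OpenAI CLIP checkpoint via Hugging Face transformers; the image path and caption strings are illustrative assumptions, not taken from the post.

```python
# Minimal sketch: score an image against candidate captions with CLIP.
# Assumes torch, transformers, and Pillow are installed; "photo.jpg" and
# the captions below are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
captions = ["a photo of a dog", "a photo of a cat", "an emoji of a rocket"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled image-text similarity scores; softmax
# turns them into a distribution over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```

Captions whose embeddings sit closest to the image embedding score highest; the same pairwise-similarity idea underpins the emoji search described in the next section.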
emoji-search-plugin
Posts with mentions or reviews of emoji-search-plugin. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-10.
- How to Build a Semantic Search Engine for Emojis

  By “this”, I mean an open-source semantic emoji search engine, with both UI-centric and CLI versions. The Python CLI library can be found here, and the UI-centric version can be found here. You can also play around with a hosted (also free) version of the UI emoji search engine online here. (A sketch of the underlying embedding-search step follows below.)
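To make the search step concrete, here is a minimal sketch of the kind of text-to-text search the excerpt implies: embed short emoji descriptions once with CLIP's text encoder, then rank them against a query by cosine similarity. The emoji descriptions, query string, and checkpoint choice are assumptions for illustration; the plugin's actual pipeline may differ.

```python
# Minimal sketch of a semantic emoji search step: embed emoji descriptions
# once with CLIP's text encoder, then rank them against a query by cosine
# similarity. The descriptions and query are made-up placeholders.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

emojis = {"🎉": "party popper", "🐍": "snake", "🚀": "rocket launching"}

def embed(texts):
    tokens = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        features = model.get_text_features(**tokens)
    # Unit-normalize so a dot product equals cosine similarity.
    return features / features.norm(dim=-1, keepdim=True)

index = embed(list(emojis.values()))     # built once, reused across queries
query = embed(["blast off into space"])  # computed per search
scores = (query @ index.T).squeeze(0)    # cosine similarities
best = scores.argmax().item()
print(list(emojis)[best])                # expected: 🚀
```

Normalizing the embeddings up front means the index can be precomputed and each query reduces to a single matrix product, which is what keeps this kind of search fast even over thousands of emoji descriptions.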
What are some alternatives?
When comparing MetaCLIP and emoji-search-plugin, you can also consider the following projects:
blip-caption - Generate captions for images with Salesforce BLIP
emoji_search - Semantically Search Emojis From the Command Line!
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
sahi - Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
autodistill-metaclip - MetaCLIP module for use with Autodistill.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
open_clip - An open source implementation of CLIP.