emoji_search vs open_clip

| | emoji_search | open_clip |
|---|---|---|
| Mentions | 2 | 30 |
| Stars | 9 | 8,850 |
| Growth | - | 4.5% |
| Activity | 7.4 | 8.1 |
| Last Commit | 7 months ago | 4 days ago |
| Language | Python | Jupyter Notebook |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
emoji_search
- How to Build a Semantic Search Engine for Emojis
pip install git+https://github.com/jacobmarks/emoji_search.git
By “this”, I mean an open-source semantic emoji search engine, with both UI-centric and CLI versions. The Python CLI library can be found here, and the UI-centric version can be found here. You can also play around with a hosted (also free) version of the UI emoji search engine online here.
open_clip
- Binarize Clip for Multimodal Applications
The part of CLIP[1] you need to know to understand this is that it embeds text and images into the same space, i.e. the word "dog" lands close to images of dogs. Normally this space is a high-dimensional real space: think 512 dimensions, or 512 floating-point numbers per embedding. When you want to measure "closeness" between vectors in this space, cosine similarity[2] is a natural choice.
Why would you want to quantize values? Well, instead of using a 32-bit float for each dimension, what if you could get away with 1 bit? That cuts storage by a factor of 32. You'll often want to embed millions or billions of pieces of text or images, so this represents a huge speed and cost saving, and if accuracy isn't impacted too much it could be worth it.
If you naively binarize the float outputs of an existing model, it severely impacts accuracy. However, if you train a model from scratch to produce binary outputs, it appears to perform better.
There is one twist: deep learning models are trained with gradient descent, and binary outputs don't produce useful gradients. We use cosine similarity on floating-point vectors and Hamming distance on bit vectors, so the question becomes: is there a function that behaves like Hamming distance but is nicely differentiable? If so, we can use that function during training and plain Hamming distance during inference. It seems like that's what they've done.
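As a minimal sketch of that trick (my own toy code, not the paper's method): tanh saturates toward ±1, so a product of tanh-squashed embeddings approximates bit agreement while staying differentiable, and at inference you switch to hard sign bits and count disagreements. The beta value and the 512-dimensional random tensors are illustrative assumptions.

```python
import torch

def soft_hamming(u: torch.Tensor, v: torch.Tensor, beta: float = 5.0) -> torch.Tensor:
    # tanh(beta * x) approaches sign(x) as beta grows, so the elementwise
    # product is near +1 where "bits" agree and near -1 where they differ.
    su, sv = torch.tanh(beta * u), torch.tanh(beta * v)
    d = u.shape[-1]
    # Map total agreement in [-d, d] to a differentiable "distance" in [0, d].
    return 0.5 * (d - (su * sv).sum(dim=-1))

def hamming(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Inference-time version: hard sign bits, count of disagreeing positions.
    return (torch.sign(u) != torch.sign(v)).sum(dim=-1)

emb_a = torch.randn(4, 512, requires_grad=True)  # stand-ins for CLIP outputs
emb_b = torch.randn(4, 512)

loss = soft_hamming(emb_a, emb_b).mean()
loss.backward()                          # gradients flow, unlike through sign()
print(hamming(emb_a.detach(), emb_b))    # integer Hamming distances at inference
```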
I'd suggest playing around with OpenCLIP[3]; a minimal quickstart is sketched after the links below. My background is in data science, but all my CLIP knowledge comes from a side project done over a couple of weekends.
1. https://huggingface.co/docs/transformers/model_doc/clip
2. https://en.wikipedia.org/wiki/Cosine_similarity
3. https://github.com/mlfoundations/open_clip
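For anyone following that suggestion, here is a short OpenCLIP quickstart closely modeled on the open_clip README; the image path and the candidate captions are placeholders.

```python
import torch
from PIL import Image
import open_clip

# Load a pretrained model plus its matching image preprocessing and tokenizer.
model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k')
model.eval()
tokenizer = open_clip.get_tokenizer('ViT-B-32')

image = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # placeholder image file
text = tokenizer(["a photo of a dog", "a photo of a cat", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product below is cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # highest probability should land on the matching caption
```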
- FLaNK-AIM: 20 May 2024 Weekly
- FLaNK AI Weekly for 29 April 2024
- A History of CLIP Model Training Data Advances
While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
- How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
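As a toy sketch of that idea (mine, not the article's code; the emoji set and query are made up), you can embed short emoji descriptions with CLIP's text encoder and rank them against a query by cosine similarity:

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Tiny hypothetical index: emoji -> short description.
emojis = {"🔥": "fire", "🐶": "dog face", "🎂": "birthday cake", "🌧️": "cloud with rain"}

def embed(texts):
    # Tokenize, run the CLIP text encoder, and L2-normalize the embeddings.
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

index = embed(list(emojis.values()))
query = embed(["it's pouring outside"])

scores = (query @ index.T).squeeze(0)  # cosine similarities
best = scores.argmax().item()
print(list(emojis)[best], round(scores[best].item(), 3))  # expect 🌧️ to rank first
```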
- Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.
Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION-2B (an image dataset with exactly the problems they claim to be addressing), C4 (a similarly encumbered text dataset), and the like.
I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.
So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.
Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate whether they could have built a working text encoder from just their stock image dataset.
It's certainly not impossible, but it's impracticable. Trained on 248M images (roughly the size of Adobe Stock), CLIP gets 37% zero-shot accuracy on ImageNet; trained on the 2B images of LAION, it gets 71-80%. And even with 2B images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on many billions more images and text tokens.
- MetaCLIP – Meta AI Research
https://github.com/mlfoundations/open_clip/blob/main/docs/op...
- COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
- Is Nicholas Renotte a good guide for a person who knows nothing about ML?
Also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution; e.g., you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training.
- Generate Image from Vector Embedding
The Stable Diffusion GitHub repo says that it uses the "OpenCLIP-ViT/H" model (https://github.com/mlfoundations/open_clip) as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
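A minimal sketch of getting a text embedding out of that encoder, assuming the standard open_clip API and the LAION-2B ViT-H/14 checkpoint; note that Stable Diffusion reportedly conditions on the encoder's token-level hidden states rather than the pooled vector shown here.

```python
import torch
import open_clip

# ViT-H-14 with LAION-2B weights; note this is a large (multi-GB) download.
model, _, _ = open_clip.create_model_and_transforms(
    'ViT-H-14', pretrained='laion2b_s32b_b79k')
model.eval()
tokenizer = open_clip.get_tokenizer('ViT-H-14')

tokens = tokenizer(["a watercolor painting of a fox"])  # placeholder prompt
with torch.no_grad():
    text_embedding = model.encode_text(tokens)  # pooled text vector

print(text_embedding.shape)  # torch.Size([1, 1024])
```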
What are some alternatives?
emoji-search-plugin - Semantic Emoji Search Plugin for FiftyOne
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
uform - Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
fiftyone - The open-source tool for building high-quality datasets and computer vision models
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
ComfyUI-Manager - ComfyUI-Manager is an extension designed to enhance the usability of ComfyUI. It offers management functions to install, remove, disable, and enable various custom nodes of ComfyUI. Furthermore, this extension provides a hub feature and convenience functions to access a wide range of information within ComfyUI.