CLIP
| | dalle-2-preview | CLIP |
|---|---|---|
| Mentions | 61 | 105 |
| Stars | 1,044 | 27,161 |
| Growth | 0.0% | 1.7% |
| Activity | 1.8 | 2.4 |
| Last commit | over 2 years ago | 7 months ago |
| Language | Jupyter Notebook | |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-2-preview
-
Microsoft-backed OpenAI to let users customize ChatGPT | Reuters
We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed).
- OpenAI AI not available for Algeria, gotta love Algeria
-
The argument against the use of datasets seems ultimately insincere and pointless
From this OpenAI document:
-
Dalle-2 is > 1,000x as dollar efficient as hiring a human illustrator.
It's also of note that you can't sell a game using this method, as Dalle-2's terms of service prevent use in commercial projects. It's hard to justify the rate of return considering you can only ever give it away for free, and even then there are some uncertain legal elements regarding copyright and the images used to train the dataset.
-
It's pretty obvious where dalle-2 gets some of their training data from! Anyone else had the Getty Images watermark? Prompt was "man in a suit standing in a fountain with his hair on fire."
On their GitHub https://github.com/openai/dalle-2-preview/blob/main/system-card.md I can only see references to v1.
-
“Pinterest” for Dalle-2 images and prompts
"b) Exploration of the bolded part of OpenAI's comment "Each generated image includes a signature in the lower right corner, with the goal of indicating when DALL·E 2 helped generate a certain image." (source)." (source link: https://github.com/openai/dalle-2-preview/blob/main/system-c...)
I feel the DALL-E 2 watermark signature could be a seed or something.
- I’m an outsider to digital art and have a couple of questions about AI-created art.
-
The AI Art Apocalypse
DALL-E's docs, for example, mention it can output whole copyrighted logos and characters [1] and acknowledge it's possible to generate human faces that bear the likeness of those in the training data. We've also seen people recently critique Stable Diffusion's output for attempting to recreate artists' signatures that came from the commercial training data.
That said, at a certain point the kinks will be ironed out, and these models will likely skirt around such issues by incorporating/manipulating just enough to be considered fair use and creative transformation.
[1] "The model can generate known entities including trademarked logos and copyrighted characters." https://github.com/openai/dalle-2-preview/blob/main/system-c...
- I worked on the Dall-e project, ask me anything (AMA)
-
Official Dalle server: Why “furry art” is a banned phrase
Some types of content were purposely excluded from the training dataset(s) (source).
CLIP
-
We used GPT-4o for image detection with 350 similar illustrations
Yes, you could implement image similarity search using embeddings: create embeddings for the entire image set, save the embeddings in a database, and add embeddings incrementally as new images come in. To search for a similar image, create the embedding for the image that you are looking for and compute the cosine similarity between that embedding and the embeddings in your database. The closer the cosine similarity is to 1.0, the more similar the images.
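A minimal sketch of that flow, using the self-hosted openai/CLIP model mentioned below [0] (the model choice and file names here are placeholder assumptions, not from the original comment):

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # placeholder model choice

def embed(path):
    """Return a unit-length CLIP embedding for one image."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        emb = model.encode_image(image)
    return emb / emb.norm(dim=-1, keepdim=True)  # normalized, so dot product == cosine similarity

# "database" of embeddings for the existing image set (placeholder file names)
db_paths = ["illustration_001.png", "illustration_002.png"]
db = torch.cat([embed(p) for p in db_paths])

# query: embed the new image and rank by cosine similarity (closer to 1.0 == more similar)
query = embed("query.png")
scores = (db @ query.T).squeeze(1)
best = scores.argmax().item()
print(db_paths[best], scores[best].item())
```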
For choosing a model, the article mentions the AWS Titan multimodal model, but you’d have to pay for API access to create the embeddings. Alternatively, self-hosting the CLIP model [0] to create embeddings would avoid API costs.
Follow-up question: Would the embeddings from the llama3.2-vision models be of higher quality (contain more information) than the original CLIP model?
The llama vision models use CLIP under the hood, but they add a projection head to align with the text model and the CLIP weights are mutated during alignment training, so I assume the llama vision embeddings would be of higher quality, but I don’t know for sure. Does anybody know?
(I would love to test this quality myself but Ollama does not yet support creating image embeddings from the llama vision models - a feature request with several upvotes has been opened [1].)
[0] https://github.com/openai/CLIP
-
Anomaly Detection with FiftyOne and Anomalib
pip install -U huggingface_hub umap-learn git+https://github.com/openai/CLIP.git
-
How to Cluster Images
We will also need two more libraries: OpenAI’s CLIP GitHub repo, enabling us to generate image features with the CLIP model, and the umap-learn library, which will let us apply a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP) to those features to visualize them in 2D:
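A rough sketch of that pipeline, assuming both libraries are installed with a command like the one shown in the previous post (the image paths and UMAP parameters here are placeholders):

```python
import torch
import clip   # from the OpenAI CLIP GitHub repo
import umap   # from the umap-learn package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["cat.jpg", "dog.jpg", "car.jpg"]  # placeholder image paths
with torch.no_grad():
    batch = torch.cat([preprocess(Image.open(p)).unsqueeze(0) for p in paths]).to(device)
    features = model.encode_image(batch).float().cpu().numpy()  # one CLIP feature vector per image

# UMAP projects the high-dimensional CLIP features down to 2D for plotting/clustering
xy = umap.UMAP(n_components=2, metric="cosine").fit_transform(features)
print(xy.shape)  # (len(paths), 2)
```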
-
Show HN: Memories, FOSS Google Photos alternative built for high performance
The biggest missing feature in all these self-hosted photo hosting projects is the lack of real search. Being able to search for things like "beach at night" is a time saver compared to browsing through hundreds or thousands of photos. There are trained neural networks out there like https://github.com/openai/CLIP which are quite good.
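For reference, a text query like that could be run against CLIP image embeddings roughly as follows (a sketch only; the photo file names are placeholders):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["IMG_0001.jpg", "IMG_0002.jpg", "IMG_0003.jpg"]  # placeholder photo library
with torch.no_grad():
    images = torch.cat([preprocess(Image.open(p)).unsqueeze(0) for p in paths]).to(device)
    image_emb = model.encode_image(images)
    image_emb /= image_emb.norm(dim=-1, keepdim=True)

    # encode the free-text query and rank photos by cosine similarity
    text_emb = model.encode_text(clip.tokenize(["beach at night"]).to(device))
    text_emb /= text_emb.norm(dim=-1, keepdim=True)

scores = (image_emb @ text_emb.T).squeeze(1)
for i in scores.argsort(descending=True):
    print(paths[i], round(scores[i].item(), 3))
```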
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks, including:
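With CLIP, zero-shot classification amounts to comparing an image against a set of candidate label prompts. A minimal sketch following the usage pattern in the openai/CLIP README (the labels and image path here are placeholders):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]  # placeholder label set
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)    # placeholder image path
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # similarity of the image to each label prompt
    probs = logits_per_image.softmax(dim=-1)

print(labels[probs.argmax().item()], probs.max().item())
```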
-
A History of CLIP Model Training Data Advances
(Github Repo | Most Popular Model | Paper | Project Page)
-
NLP Algorithms for Clustering AI Content Search Keywords
The first thing that comes to mind is CLIP: https://github.com/openai/CLIP
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
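That contrastive property is easy to check directly: a matching image-caption pair should score a noticeably higher cosine similarity than the same image paired with unrelated text. A sketch using the original OpenAI CLIP model (the image path and text strings are placeholders):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("smiling_face_emoji.png")).unsqueeze(0).to(device)  # placeholder image
texts = clip.tokenize(["a smiling face", "a sailboat at sea"]).to(device)         # caption vs. unrelated text

with torch.no_grad():
    img = model.encode_image(image)
    txt = model.encode_text(texts)
    img /= img.norm(dim=-1, keepdim=True)
    txt /= txt.norm(dim=-1, keepdim=True)

# cosine similarities: the matching caption should come out clearly higher
print((img @ txt.T).squeeze(0).tolist())
```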
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Stability Matrix v1.1.0 - Portable mode, Automatic updates, Revamped console, and more
Command: "C:\StabilityMatrix\Packages\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
open_clip - An open source implementation of CLIP.
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
sentence-transformers - State-of-the-Art Text Embeddings
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
clip-interrogator - Image to prompt with BLIP and CLIP
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
disco-diffusion