open_clip vs stable-diffusion-webui
| | open_clip | stable-diffusion-webui |
|---|---|---|
| Mentions | 27 | 2,808 |
| Stars | 8,452 | 129,299 |
| Growth | 8.2% | - |
| Activity | 8.2 | 9.9 |
| Latest commit | 17 days ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | MIT |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_clip
-
A History of CLIP Model Training Data Advances
While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
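For concreteness, here is a minimal sketch of that image-text comparison using the open_clip library (which also hosts OpenAI's original weights); the model name, image path, and captions are illustrative:

```python
# Minimal sketch of CLIP-style image-text matching with open_clip.
# Model/checkpoint names, image path, and captions are illustrative.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"  # OpenAI's original CLIP weights
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical file
texts = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # Normalize so the dot product below is cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarity = image_features @ text_features.T  # higher = better match

print(similarity)
```

A matching caption should score noticeably higher than an unrelated one, which is exactly the property semantic search builds on.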
-
Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.
Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION-2B (an image dataset with the very problems they claim to address), C4 (a similarly encumbered text dataset), and the like.
I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.
So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.
Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate the likelihood that they could build a working text encoder from just their stock image dataset.
It's certainly not impossible, but it's impracticable. With 248M images (roughly the size of Adobe Stock), CLIP reaches about 37% zero-shot accuracy on ImageNet; with the 2,000M images from LAION, it reaches 71-80%. And even at 2,000M images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on essentially many billions more images and text tokens.
-
MetaCLIP – Meta AI Research
https://github.com/mlfoundations/open_clip/blob/main/docs/op...
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Is Nicholas Renotte a good guide for a person who knows nothing about ML?
also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution, e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training
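As context for that suggestion, open_clip exposes a registry of the pretrained pairings it supports; a minimal sketch of listing them:

```python
# Browse the pretrained (architecture, checkpoint) pairs that open_clip
# supports out of the box, via its documented registry.
import open_clip

for arch, checkpoint in open_clip.list_pretrained():
    print(arch, checkpoint)
```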
-
Generate Image from Vector Embedding
It says on the Stable Diffusion Github repo that it uses the “OpenCLIP-ViT/H” https://github.com/mlfoundations/open_clip model as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
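A hedged sketch of pulling a text embedding from that encoder family with open_clip; the ViT-H-14 checkpoint tag is the commonly published LAION-2B one, and the prompt is made up:

```python
# Extracting a text embedding from the OpenCLIP ViT-H/14 family that
# Stable Diffusion 2.x uses as its text encoder. Checkpoint tag is the
# published LAION-2B one; the prompt is illustrative.
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")

tokens = tokenizer(["an astronaut riding a horse"])
with torch.no_grad():
    text_embedding = model.encode_text(tokens)

print(text_embedding.shape)  # pooled embedding, 1024-d for ViT-H/14
```

Note that Stable Diffusion itself conditions on the encoder's internal hidden states rather than this pooled vector, but the pooled output is what you want for embedding-style comparisons.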
-
What's up in the Python community? – April 2023
https://replicate.com/pharmapsychotic/clip-interrogator
using:
cfg.apply_low_vram_defaults()
interrogate_fast()
I tried lighter models like ViT-B/32 trained on LAION-400M, among others, but they are all very slow to load and use (model list: https://github.com/mlfoundations/open_clip)
I'm desperately looking for something more modest and light.
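For context, a hedged sketch of how those two calls fit together in clip-interrogator's documented Config/Interrogator API; the model name and image path are illustrative:

```python
# Low-VRAM clip-interrogator usage, per the library's Config/Interrogator
# API. Model name and image path are illustrative.
from PIL import Image
from clip_interrogator import Config, Interrogator

cfg = Config(clip_model_name="ViT-L-14/openai")
cfg.apply_low_vram_defaults()  # trades speed/quality for lower VRAM use

ci = Interrogator(cfg)
image = Image.open("photo.jpg").convert("RGB")
print(ci.interrogate_fast(image))  # faster, less thorough than interrogate()
```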
-
Low accuracy on my CNN model.
A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
- Looking for OpenAI CLIP alternative
stable-diffusion-webui
-
Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You are right that LLaVA cannot generate images. I have no immediate plans for image generation, but check out these projects for local image generation:
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to have a native Stable Diffusion experience; my RX 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my Windows machine in short order, and I don't even know what Stable Diffusion is.
But again, it would be nice to have first class support to locally participate in the fun.
-
Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
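For reference, PIL's Image.blend is just pixel-wise linear interpolation between two images; a minimal sketch with hypothetical file names:

```python
# Image.blend computes out = image1 * (1 - alpha) + image2 * alpha,
# pixel-wise. Both images must share the same size and mode.
from PIL import Image

a = Image.open("original.png")   # hypothetical inputs
b = Image.open("enhanced.png")
blended = Image.blend(a, b, alpha=0.5)  # 0.0 -> all a, 1.0 -> all b
blended.save("blended.png")
```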
- How To Increase Performance Time on MacOS
-
Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in automatic1111 webui for both images. Now that I think about it, the procedure I described was how I made this one, but the one that looks like an illustration was done in two steps. I used the canny ControlNet as I said for the outer part of the logo to preserve the shape of the fonts, but I had to turn it off for the boot to give SDXL leeway to add detail and make it look more like a boot.
-
Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding, and don't get discouraged when somebody says to learn PyTorch or recommends using a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The ComfyUI API documentation is here and the A1111 API documentation is here. There is a difference in completeness; welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type that wants to deal with a nested file structure to simulate logic.
- can't get it working with an AMD gpu
-
SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
-
4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8
What are some alternatives?
CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
SHARK - SHARK - High Performance Machine Learning Distribution
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them
safetensors - Simple, safe way to store and distribute tensors