| | clip-interrogator | laion-datasets |
|---|---|---|
| Mentions | 27 | 6 |
| Stars | 2,491 | 213 |
| Growth | - | 7.0% |
| Activity | 4.8 | 0.0 |
| Last commit | 3 months ago | over 1 year ago |
| Language | Python | HTML |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-interrogator
-
AI Horde’s AGPL3 hordelib receives DMCA take-down from hlky
It's image -> words, the inverse of stable diffusion.
see: https://github.com/pharmapsychotic/clip-interrogator
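At its core, the tool compares a CLIP image embedding against embeddings of candidate phrases (artists, mediums, "flavors") and keeps the closest matches. Here is a minimal sketch of that ranking step using toy NumPy vectors in place of real CLIP embeddings; the function name and the 4-dimensional vectors are illustrative, not the tool's actual API:

```python
import numpy as np

def rank_candidates(image_emb, text_embs, labels):
    """Rank candidate text labels by cosine similarity to an image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                       # cosine similarity per candidate
    order = np.argsort(-sims)              # best match first
    return [(labels[i], float(sims[i])) for i in order]

# Toy 4-dimensional embeddings standing in for real CLIP vectors.
image_emb = np.array([0.9, 0.1, 0.0, 0.1])
text_embs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
labels = ["a photo of a lion", "a watercolor landscape", "a city street at night"]
ranked = rank_candidates(image_emb, text_embs, labels)
print(ranked[0][0])  # the best-matching phrase
```

The real interrogator repeats this over thousands of candidate phrases and assembles the top matches into a prompt.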
-
What are the "fastest" image classifiers I can use?
I have been using this on a CPU: https://github.com/pharmapsychotic/clip-interrogator. I tried a lot of pre-trained model combinations, and all are slow.
- New Monthly Event!
-
I keep trying to recreate this scene as a painting. But the AI doesn't get it. How do I describe that the man is reaching behind to stab a lion in the head, as the lion has pounced and is biting the rear of the horse. The AI always redraws this without the lion or not how it is shown here.
In addition to ControlNet, try the CLIP Interrogator to see how CLIP would describe the image, then use that language in your prompt. You can try the whole image or cropped portions. There is a Colab available if you don't want to run it locally.
-
For LoRA training, isn't there a good AI that describes the pictures you want to use for training?
In my current process, I use CLIP Interrogator to produce a high level caption and wd14 tagger for more granular booru tags. Typically in that order, because you can append the results from the latter to the former. Both tools perform with greater accuracy than the standard interrogators in img2img and give you more flexibility and features as well. You still have to do some manual adjustments, but I generally prefer this process over starting from scratch.
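The "append the tagger's results to the caption" step described above is just string assembly. A minimal sketch, where the caption and tags are hypothetical stand-ins for CLIP Interrogator and wd14 tagger output:

```python
def build_training_caption(caption: str, tags: list[str]) -> str:
    """Append booru-style tags from a tagger to a natural-language caption."""
    return ", ".join([caption.strip().rstrip(",")] + tags)

# Hypothetical outputs standing in for CLIP Interrogator and wd14 tagger results.
caption = "a knight on horseback fighting a lion, oil painting"
tags = ["1boy", "horse", "lion", "sword", "dynamic_pose"]
print(build_training_caption(caption, tags))
```

In practice you would run this over every image in the training folder and write one caption file per image before handing the set to the LoRA trainer.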
- Midjourney Image2text
-
Tech pioneers call for six-month pause of "out-of-control" AI development
If you are interested in this, definitely see if you can get some of the OSS models running and get a feel for how to interrogate them. Maybe see if you can get some mileage out of the CLIP-Interrogator
-
ChatGPT 3.5 vs 4 & Stable Diffusion
Next, I used the lists of artists, flavors, mediums, movements, and negatives that the clip-interrogator uses, pasted them into the chat, and told the bot to categorize them accordingly. You can only paste up to a certain number of characters in a single message (4-5K in 3.5 and 6-8K in 4).
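Splitting those lists to fit under a per-message character limit is easy to automate. A small sketch (the 4,000-character budget matches the rough GPT-3.5 limit mentioned above; the function name is mine):

```python
def chunk_lines(lines, max_chars=4000):
    """Split a list of strings into newline-joined chunks, each below max_chars."""
    chunks, current = [], ""
    for line in lines:
        candidate = current + ("\n" if current else "") + line
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = line          # start a new chunk with this line
        else:
            current = candidate     # a single oversized line stays whole
    if current:
        chunks.append(current)
    return chunks

# e.g. paste each chunk of the artist list into a separate chat message
artists = [f"artist {i}" for i in range(1000)]
messages = chunk_lines(artists, max_chars=4000)
```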
-
Any idea of what type of prompt has been used to make this?
Here’s the specific one I’m using (runs in browser)
-
CLIP Interrogator 2 locally
I really enjoy using the CLIP Interrogator on Hugging Face Spaces, but it is often super slow and sometimes breaks outright. It is now possible to install it locally (https://github.com/pharmapsychotic/clip-interrogator), but I don't know if it's viable to run on a laptop with a 6 GB video card anyway.
laion-datasets
-
Valve is reportedly banning games featuring AI generated content
Not true; it uses the MIT license, which allows any use, including commercial. According to the license, you could even sell the LAION datasets yourself if you wanted.
- I don't understand why people are so adamant that nobody have fun. Literally nobody is being harmed by screwing around with AI art programs for personal amusement.
-
The AI Art Apocalypse
Datasets can be manually curated to produce more aesthetic results if this becomes a real issue. For example, classifiers can predict whether an image is generated or not. You could adapt the process used to create laion-aesthetic[0] to remove generated images.
[0]: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
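The filtering pass described above is a simple thresholding step over classifier scores, the same shape as the aesthetic-score filter used to build laion-aesthetic. A toy sketch with made-up records and a hypothetical "probability the image is AI-generated" field:

```python
def filter_dataset(records, score_fn, threshold=0.5):
    """Keep only records whose classifier score falls below the threshold."""
    return [r for r in records if score_fn(r) < threshold]

# Toy records with a fake "probability the image is AI-generated".
records = [
    {"url": "a.jpg", "p_generated": 0.05},
    {"url": "b.jpg", "p_generated": 0.92},
    {"url": "c.jpg", "p_generated": 0.30},
]
kept = filter_dataset(records, lambda r: r["p_generated"], threshold=0.5)
print([r["url"] for r in kept])  # ['a.jpg', 'c.jpg']
```

For laion-aesthetic the logic is inverted (keep records whose predicted aesthetic score is *above* a threshold), but the curation mechanism is the same.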
-
The current model was trained on LAION 2B, a 100 TB dataset containing 2 billion images. If we train on LAION 5B which contains 5 billion images will the quality and prompt understanding go up a lot?
source: https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md
-
Open-source rival for OpenAI’s DALL-E runs on your graphics card
My hunch is that is the result of this: https://github.com/CompVis/stable-diffusion#weights
> 515k steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, estimated aesthetics score > 5.0)
https://github.com/LAION-AI/laion-datasets/blob/main/laion-a... for more details.
What's remarkable is this: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
That aesthetic predictor was apparently trained on only 4000 images. If my thinking is correct, imagine the impact those 4000 ratings have had on all of the output of this model.
You can see samples (some NSFW) of different images from the original training set in different rating buckets here, to get an idea of what was included or not in those training steps. http://3080.rom1504.fr/aesthetic/aesthetic_viz.html
- "Laion-aesthetic is a subset of Laion5B that has been estimated by a model trained on top of CLIP embeddings to be aesthetic"
What are some alternatives?
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
dalle-2-preview
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion-artists - Curated list of artists for Stable Diffusion prompts
dalle-mini - DALL·E Mini - Generate images from a text prompt
hordelib - A wrapper around ComfyUI to allow use by the AI Horde. [UnavailableForLegalReasons - Repository access blocked]
simulacrabot - Discord AI Generation Bot to collect an aesthetic rating dataset
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
autodistill-metaclip - MetaCLIP module for use with Autodistill.
playground - Play with neural networks!
dfserver - A distributed backend AI pipeline server