| | clip-interrogator | stable-diffusion-artists |
|---|---|---|
| Mentions | 27 | 4 |
| Stars | 2,491 | 149 |
| Growth | - | - |
| Activity | 4.8 | 10.0 |
| Latest commit | 3 months ago | over 1 year ago |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-interrogator
-
AI Horde’s AGPL3 hordelib receives DMCA take-down from hlky
It's image -> words, the inverse of stable diffusion.
see: https://github.com/pharmapsychotic/clip-interrogator
-
What are the "fastest" image classifiers I can use?
I have been using this on a CPU: https://github.com/pharmapsychotic/clip-interrogator. I tried a lot of pre-trained model combinations; all are slow.
-
New Monthly Event!
-
I keep trying to recreate this scene as a painting, but the AI doesn't get it. How do I describe that the man is reaching behind to stab a lion in the head while the lion has pounced and is biting the rear of the horse? The AI always redraws this without the lion, or not how it is shown here.
In addition to ControlNet, try the CLIP Interrogator to see how CLIP would describe the image, and then use that language in your prompt. You can try the whole image or cropped portions. There is a Colab available if you don't want to run it locally.
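The "cropped portions" idea above can be sketched as a small pure-Python helper (hypothetical, not part of clip-interrogator) that computes the four quadrant crop boxes in PIL's `(left, upper, right, lower)` convention, so each region could be interrogated separately:

```python
def quadrant_boxes(width, height):
    """Split an image of the given size into four crop boxes
    (left, upper, right, lower), PIL's box convention, so each
    region can be fed to the interrogator on its own."""
    mid_x, mid_y = width // 2, height // 2
    return [
        (0, 0, mid_x, mid_y),           # top-left
        (mid_x, 0, width, mid_y),       # top-right
        (0, mid_y, mid_x, height),      # bottom-left
        (mid_x, mid_y, width, height),  # bottom-right
    ]
```

Each box could then be passed to `Image.crop` before interrogation; finer grids follow the same pattern.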
-
For LoRA training, isn't there a good AI that describes the pictures you want to use for training?
In my current process, I use CLIP Interrogator to produce a high level caption and wd14 tagger for more granular booru tags. Typically in that order, because you can append the results from the latter to the former. Both tools perform with greater accuracy than the standard interrogators in img2img and give you more flexibility and features as well. You still have to do some manual adjustments, but I generally prefer this process over starting from scratch.
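The append step in the pipeline above (CLIP Interrogator caption first, wd14 booru tags second) can be sketched with a hypothetical helper that joins the two outputs while skipping tags the caption already contains:

```python
def combine_caption_and_tags(caption, tags):
    """Append booru-style tags (e.g. from a wd14 tagger) to a
    high-level caption (e.g. from CLIP Interrogator), skipping
    words the caption already mentions."""
    caption_words = set(caption.lower().replace(",", " ").split())
    extra = [t for t in tags if t.lower() not in caption_words]
    return caption + (", " + ", ".join(extra) if extra else "")
```

The manual-adjustment pass the commenter describes would then edit this combined string per image.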
- Midjourney Image2text
-
Tech pioneers call for six-month pause of "out-of-control" AI development
If you are interested in this, definitely see if you can get some of the OSS models running and get a feel for how to interrogate them. Maybe see if you can get some mileage out of CLIP-Interrogator.
-
ChatGPT 3.5 vs 4 & Stable Diffusion
Next, I used the lists of artists, flavors, mediums, movements, and negatives that are used by the clip-interrogator, pasted these into the chat, and told the bot to categorize them accordingly. You can only paste up to a certain number of characters in a single message (4–5K in 3.5 and 6–8K in 4).
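Splitting those term lists to fit the per-message character limits can be sketched as a greedy chunker (a hypothetical helper, not part of either project), where `max_chars` would be roughly 4,000–5,000 for GPT-3.5:

```python
def chunk_list(items, max_chars, sep=", "):
    """Greedily pack items into chunks whose joined length stays
    within max_chars, so each chunk fits in one chat message."""
    chunks, current = [], []
    for item in items:
        candidate = sep.join(current + [item])
        if current and len(candidate) > max_chars:
            chunks.append(sep.join(current))
            current = [item]
        else:
            current.append(item)
    if current:
        chunks.append(sep.join(current))
    return chunks
```

Each returned chunk is pasted as one message; the categorization instruction is repeated per message.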
-
Any idea of what type of prompt has been used to make this?
Here’s the specific one I’m using (runs in browser)
-
CLIP Interrogator 2 locally
I really enjoy using the CLIP Interrogator on Hugging Face Spaces, but it is often super slow and sometimes straight up breaks. It is now possible to install it locally (https://github.com/pharmapsychotic/clip-interrogator), but I don't know if it's viable to run on a laptop with a 6 GB video card anyway.
stable-diffusion-artists
-
Luddite Anti-AI artists have already lost.
It wouldn't have "tags" (is that the right term?) for the names of present-day artists whose work isn't public domain, so you wouldn't be able to prompt with the names of living artists (like this).
-
A whole new universe to explore with electron microscope photography
Idea 1: try "microbe bacteria", "microbe virus", "microbe water bear".
Idea 2: prefix the prompt with "cute".
Idea 3: different artists will give significantly different results; you can find a good list to experiment with here: https://github.com/kaikalii/stable-diffusion-artists
Idea 4: add "vivid colors".
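Combining those ideas can be sketched as a small prompt builder (hypothetical names; the artist shown is just an example entry one might take from the stable-diffusion-artists list):

```python
import random

def build_prompt(subject, artists, extras=(), rng=random):
    """Build a Stable Diffusion prompt from a subject, one artist
    picked from a list (e.g. kaikalii's stable-diffusion-artists),
    and optional style keywords such as "cute" or "vivid colors"."""
    artist = rng.choice(list(artists))
    parts = [subject, f"by {artist}", *extras]
    return ", ".join(parts)
```

Passing a seeded `random.Random` as `rng` makes the artist pick reproducible across runs.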
-
What are some good artist and style prompts?
I know NAI was trained on Danbooru, but that dataset is huge, so it can't possibly have been trained on the entirety of it. I am wondering if there is something like an artist list for NovelAI similar to the one for Stable Diffusion.
-
sdartists.app - Curated list of artists identified in the model
I started working on this site a little over a month ago, having just started fiddling with Stable Diffusion on my vastly underpowered gaming PC. My curiosity about what artists the model "understood" led me to track down a GitHub repo by kaikalii. kaikalii's process for processing and documenting identified artists was excellent, and I submitted a few I found to the repo. I found that I wanted a few additional features, though: proper tags, a random artist lookup, searching, and some real-world application of the artist prompts. I built out the site, which took about a week, and went about generating metadata and curated prompts, which took a reaaaaaally long time (especially my very snarky comments on the particularly bad prompts). In the end, between kaikalii's list and my own research, I ended up with 93 artists and a little over 1.8K images at time of "launch". I have a fairly large backlog of other prompts to try out, but wanted to get this out in the wild before I continued my research. Here's a little sampling:
What are some alternatives?
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
laion-datasets - Description and pointers of laion datasets
dalle-2-preview
hordelib - A wrapper around ComfyUI to allow use by the AI Horde. [UnavailableForLegalReasons - Repository access blocked]
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
autodistill-metaclip - MetaCLIP module for use with Autodistill.
playground - Play with neural networks!
dfserver - A distributed backend AI pipeline server
dspy - DSPy: The framework for programming—not prompting—foundation models
pyllama - LLaMA: Open and Efficient Foundation Language Models
stable-diffusion-webui - Stable Diffusion web UI