clip-interrogator
| | clip-interrogator | dalle-2-preview |
|---|---|---|
| Mentions | 27 | 61 |
| Stars | 2,491 | 1,049 |
| Growth | - | 0.0% |
| Activity | 4.8 | 1.8 |
| Last commit | 3 months ago | almost 2 years ago |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-interrogator
-
AI Horde’s AGPL3 hordelib receives DMCA take-down from hlky
It's image -> words, the inverse of Stable Diffusion.
see: https://github.com/pharmapsychotic/clip-interrogator
-
What are the "fastest" image classifiers I can use?
I have been using this on a CPU: https://github.com/pharmapsychotic/clip-interrogator. I tried a lot of pre-trained model combinations; all are slow.
- New Monthly Event!
-
I keep trying to recreate this scene as a painting, but the AI doesn't get it. How do I describe that the man is reaching behind to stab a lion in the head, as the lion has pounced and is biting the rear of the horse? The AI always redraws this without the lion, or not how it is shown here.
In addition to ControlNet, try the CLIP Interrogator to see how CLIP would describe the image, and then use that language in your prompt. You can try the whole image or cropped portions. There is a Colab available if you don't want to run it locally.
-
For LoRA training, isn't there a good AI that describes the pictures you want to use for training?
In my current process, I use CLIP Interrogator to produce a high level caption and wd14 tagger for more granular booru tags. Typically in that order, because you can append the results from the latter to the former. Both tools perform with greater accuracy than the standard interrogators in img2img and give you more flexibility and features as well. You still have to do some manual adjustments, but I generally prefer this process over starting from scratch.
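The append step described above (CLIP Interrogator caption first, wd14 booru tags after) can be sketched in a few lines. This is a minimal illustration assuming both tools have already produced their outputs; `merge_caption_and_tags` is a hypothetical helper, not part of either tool.

```python
def merge_caption_and_tags(caption: str, tags: list[str]) -> str:
    """Append booru-style tags to a high-level caption, skipping any tag
    that already appears as a substring of the caption (case-insensitive)."""
    seen = caption.lower()
    extra = [t for t in tags if t.lower() not in seen]
    return ", ".join([caption.rstrip(", ")] + extra)
```

The substring check is a crude deduplication; in practice you would still do the manual adjustments the comment mentions.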
- Midjourney Image2text
-
Tech pioneers call for six-month pause of "out-of-control" AI development
If you are interested in this, definitely see if you can get some of the OSS models running and get a feel for how to interrogate them. Maybe see if you can get some mileage out of the CLIP Interrogator.
-
ChatGPT 3.5 vs 4 & Stable Diffusion
Next, I used the lists of artists, flavors, mediums, movements, and negatives that are used for the clip-interrogator, pasted these into the chat, and told the bot to categorize them accordingly, since you can only paste up to a certain number of characters in a single message (4-5K in 3.5 and 6-8K in 4).
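Because of that per-message character limit, the lists have to be split into chunks before pasting. A minimal sketch of one way to do that greedy packing; `chunk_terms` is a hypothetical helper, and the 4,000-character default is an assumption based on the ~4-5K limit mentioned for 3.5.

```python
def chunk_terms(terms: list[str], max_chars: int = 4000) -> list[str]:
    """Greedily pack newline-separated terms into chunks that stay under
    max_chars, so each chunk fits into a single chat message."""
    chunks: list[str] = []
    current = ""
    for term in terms:
        candidate = term if not current else current + "\n" + term
        if len(candidate) > max_chars and current:
            # Current chunk is full; start a new one with this term.
            chunks.append(current)
            current = term
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be pasted as one message, with the categorization instruction repeated at the top.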
-
Any idea of what type of prompt has been used to make this?
Here’s the specific one I’m using (runs in browser)
-
CLIP Interrogator 2 locally
I really enjoy using the CLIP Interrogator on Hugging Face Spaces, but it is often super slow and sometimes straight up breaks. Now it is possible to install it locally (https://github.com/pharmapsychotic/clip-interrogator), but I don't know whether it's viable to run on a laptop with a 6 GB video card anyway.
dalle-2-preview
-
Microsoft-backed OpenAI to let users customize ChatGPT | Reuters
We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed).
- OpenAI AI not available for Algeria, gotta love Algeria
-
The argument against the use of datasets seems ultimately insincere and pointless
From this OpenAI document:
-
Dalle-2 is > 1,000x as dollar efficient as hiring a human illustrator.
It's also of note that you can't sell a game using this method, as DALL-E 2's terms of service prevent use in commercial projects. It's hard to justify the rate of return considering you can only ever give it away for free, and even then there are some uncertain legal elements regarding copyright and the images used to train the dataset.
-
It's pretty obvious where dalle-2 gets some of their training data from! Anyone else had the Getty Images watermark? Prompt was "man in a suit standing in a fountain with his hair on fire."
On their GitHub https://github.com/openai/dalle-2-preview/blob/main/system-card.md I can only see references to v1.
-
“Pinterest” for Dalle-2 images and prompts
"b) Exploration of the bolded part of OpenAI's comment "Each generated image includes a signature in the lower right corner, with the goal of indicating when DALL·E 2 helped generate a certain image." (source)." (source link: https://github.com/openai/dalle-2-preview/blob/main/system-c...)
I feel the DALL-E 2 watermark signature could be a seed or something.
- I’m an outsider to digital art and have a couple questions about A.I created art.
-
The AI Art Apocalypse
DALL-E's docs, for example, mention it can output whole copyrighted logos and characters[1], and acknowledge it's possible to generate human faces that bear the likeness of those in the training data. We've also seen people recently critique Stable Diffusion's output for attempting to recreate artists' signatures that came from the commercial training data.
That said, by a certain point the kinks will be ironed out, and such issues will likely be skirted by incorporating/manipulating just enough to be considered fair use and creative transformation.
[1] "The model can generate known entities including trademarked logos and copyrighted characters." https://github.com/openai/dalle-2-preview/blob/main/system-c...
- I worked on the DALL-E project, ask me anything (AMA)
-
Official Dalle server: Why “furry art” is a banned phrase
Some types of content were purposely excluded from the training dataset(s) (source).
What are some alternatives?
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
dalle-mini - DALL·E Mini - Generate images from a text prompt
laion-datasets - Description and pointers of laion datasets
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
stable-diffusion-artists - Curated list of artists for Stable Diffusion prompts
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
hordelib - A wrapper around ComfyUI to allow use by the AI Horde. [UnavailableForLegalReasons - Repository access blocked]
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
disco-diffusion
autodistill-metaclip - MetaCLIP module for use with Autodistill.
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model