| | dalle-2-preview | laion-datasets |
|---|---|---|
| Mentions | 61 | 6 |
| Stars | 1,049 | 213 |
| Growth | 0.0% | 7.0% |
| Activity | 1.8 | 0.0 |
| Latest Commit | almost 2 years ago | over 1 year ago |
| Language | HTML | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-2-preview
-
Microsoft-backed OpenAI to let users customize ChatGPT | Reuters
We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed).
- OpenAI AI not available for Algeria, gotta love Algeria
-
The argument against the use of datasets seems ultimately insincere and pointless
From this OpenAI document:
-
Dalle-2 is > 1,000x as dollar efficient as hiring a human illustrator.
It's also worth noting that you can't sell a game using this method, as DALL-E 2's terms of service prohibit use in commercial projects. It's hard to justify the rate of return when you can only ever give it away for free, and even then there are uncertain legal questions around copyright and the images used to train the dataset.
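A back-of-envelope check of the ">1,000x" dollar-efficiency claim above. Both prices are illustrative assumptions, not quoted rates:

```python
# Assumed figures: a human illustrator at roughly $50 per finished
# image, and a generated image at roughly $0.02 (a later API-era
# pricing ballpark). Neither number comes from the comment itself.
illustrator_cost_per_image = 50.00
generated_cost_per_image = 0.02

ratio = illustrator_cost_per_image / generated_cost_per_image
print(f"cost ratio: {ratio:,.0f}x")  # 2,500x under these assumptions
```

Even with these conservative stand-in numbers the ratio clears 1,000x, which is why the claim is plausible despite the commercial-use restriction making it moot for resale.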
-
It's pretty obvious where dalle-2 gets some of their training data from! Anyone else had the Getty Images watermark? Prompt was "man in a suit standing in a fountain with his hair on fire."
On their GitHub https://github.com/openai/dalle-2-preview/blob/main/system-card.md I can only see references to v1.
-
“Pinterest” for Dalle-2 images and prompts
b) Exploration of the bolded part of OpenAI's comment: "Each generated image includes a signature in the lower right corner, with the goal of indicating when DALL·E 2 helped generate a certain image." (source: https://github.com/openai/dalle-2-preview/blob/main/system-c...)
I feel the DALL-E 2 watermark signature could be a seed or something.
- I’m an outsider to digital art and have a couple of questions about AI-created art.
-
The AI Art Apocalypse
DALL-E's docs, for example, mention that it can output whole copyrighted logos and characters[1], and acknowledge it's possible to generate human faces that bear the likeness of those in the training data. We've also seen people recently critique Stable Diffusion's output for attempting to recreate artists' signatures picked up from the commercial training data.
That said, at a certain point the kinks will be ironed out, and models will likely skirt such issues by incorporating/manipulating just enough to be considered fair use and creative transformation.
[1] "The model can generate known entities including trademarked logos and copyrighted characters." https://github.com/openai/dalle-2-preview/blob/main/system-c...
- I worked on the Dall-e project, ask me anything (AMA)
-
Official Dalle server: Why “furry art” is a banned phrase
Some types of content were purposely excluded from the training dataset(s) (source).
laion-datasets
-
Valve is reportedly banning games featuring AI-generated content
Not true: it uses the MIT License, which allows any use, including commercial. Under the license you could even sell the Laion datasets yourself if you wanted.
- I don't understand why people are so adamant that nobody have fun. Literally nobody is being harmed by screwing around with AI art programs for personal amusement.
-
The AI Art Apocalypse
Datasets can be manually curated to produce more aesthetic results if this becomes a real issue. For example, classifiers can predict whether an image is generated or not. You could adapt the process used to create laion-aesthetic[0] to remove generated images.
[0]: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
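A minimal sketch of that curation idea, assuming a hypothetical `is_generated` classifier that returns a per-image probability (this stands in for any real detector; it is not LAION's actual tooling):

```python
# Hypothetical filtering pass: keep only images a detector considers
# unlikely to be AI-generated. `is_generated` is a placeholder for a
# trained classifier returning P(image is generated).
def is_generated(image) -> float:
    # A real implementation would run a trained detector on the
    # image pixels; here we just read a precomputed score.
    return image.get("gen_score", 0.0)

def filter_dataset(images, threshold=0.5):
    """Drop images whose generated-probability exceeds the threshold."""
    return [img for img in images if is_generated(img) <= threshold]

dataset = [
    {"url": "a.jpg", "gen_score": 0.05},  # likely a real photo
    {"url": "b.jpg", "gen_score": 0.92},  # likely AI-generated
    {"url": "c.jpg", "gen_score": 0.40},
]
kept = filter_dataset(dataset)
print([img["url"] for img in kept])  # ['a.jpg', 'c.jpg']
```

The same thresholding pattern is how the laion-aesthetic subsets are carved out of the larger datasets, just with an aesthetics score instead of a generated-image score.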
-
The current model was trained on LAION 2B, a 100 TB dataset containing 2 billion images. If we train on LAION 5B, which contains 5 billion images, will the quality and prompt understanding go up a lot?
source: https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md
-
Open-source rival for OpenAI’s DALL-E runs on your graphics card
My hunch is that is the result of this: https://github.com/CompVis/stable-diffusion#weights
> 515k steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, estimated aesthetics score > 5.0)
https://github.com/LAION-AI/laion-datasets/blob/main/laion-a... for more details.
What's remarkable is this: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
That aesthetic predictor was apparently trained on only 4000 images. If my thinking is correct, imagine the impact those 4000 ratings have had on all of the output of this model.
You can see samples (some NSFW) of different images from the original training set in different rating buckets here, to get an idea of what was included or not in those training steps. http://3080.rom1504.fr/aesthetic/aesthetic_viz.html
- "Laion-aesthetic is a subset of Laion5B that has been estimated by a model trained on top of CLIP embeddings to be aesthetic"
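A rough sketch of what "a model trained on top of CLIP embeddings" can mean in practice: freeze the image embeddings and fit a small regression head to a few thousand human ratings. The dimensions, ridge head, and data here are illustrative assumptions, not LAION's actual predictor:

```python
import numpy as np

# Toy stand-ins for frozen CLIP image embeddings (512-dim in real
# CLIP; 8 here) and human aesthetic ratings on a 1-10 scale.
rng = np.random.default_rng(0)
n, dim = 200, 8
embeddings = rng.normal(size=(n, dim))
true_w = rng.normal(size=dim)
ratings = embeddings @ true_w + rng.normal(scale=0.1, size=n)

# Ridge-regression head in closed form: w = (X^T X + lam*I)^-1 X^T y.
lam = 1e-2
w = np.linalg.solve(embeddings.T @ embeddings + lam * np.eye(dim),
                    embeddings.T @ ratings)

# Score new images and keep those above a cutoff, analogous to the
# "estimated aesthetics score > 5.0" filter used for the subsets.
new_embeddings = rng.normal(size=(5, dim))
scores = new_embeddings @ w
keep = new_embeddings[scores > 5.0]
print(len(keep), "of", len(new_embeddings), "images kept")
```

Because the head is this small, a few thousand ratings really can steer which of billions of images survive the filter, which is what makes those 4,000 labels so influential.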
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
clip-interrogator - Image to prompt with BLIP and CLIP
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
stable-diffusion - A latent text-to-image diffusion model
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
simulacrabot - Discord AI Generation Bot to collect an aesthetic rating dataset
disco-diffusion
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image