| | laion-datasets | simulacrabot |
| --- | --- | --- |
| Mentions | 6 | 2 |
| Stars | 213 | 61 |
| Growth | 7.0% | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | HTML | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
laion-datasets
-
Valve is reportedly banning games featuring AI generated content
Not true; it uses the MIT license, which allows any use, including commercial. According to the license you could even sell the LAION datasets yourself if you wanted.
- I don't understand why people are so adamant that nobody have fun. Literally nobody is being harmed by screwing around with AI art programs for personal amusement.
-
The AI Art Apocalypse
Datasets can be manually curated to produce more aesthetic results if this becomes a real issue. For example, classifiers can predict whether an image is generated or not. You could adapt the process used to create laion-aesthetic[0] to remove generated images.
[0]: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
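The filtering idea above can be sketched in code. This is a minimal illustration, not anything from the LAION repo: it assumes images are already represented as fixed-size embedding vectors (e.g. from CLIP), trains a simple logistic-regression classifier on examples labeled real vs. generated, and keeps only images the classifier considers real.

```python
import numpy as np

def train_filter(embeds, labels, lr=0.1, epochs=200):
    """Logistic regression on embedding vectors: label 1 = generated, 0 = real."""
    w = np.zeros(embeds.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = embeds @ w + b
        p = 1.0 / (1.0 + np.exp(-z))           # predicted P(generated)
        grad = p - labels                       # gradient of the cross-entropy loss
        w -= lr * embeds.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def keep_mask(embeds, w, b, threshold=0.5):
    """True for images the classifier scores as real (P(generated) < threshold)."""
    p = 1.0 / (1.0 + np.exp(-(embeds @ w + b)))
    return p < threshold
```

A curation pass would then drop every row where `keep_mask` is False before training. The function names and the plain-numpy training loop are placeholders; a real pipeline would use a proper library and batched inference over the dataset.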
-
The current model was trained on LAION 2B, a 100 TB dataset containing 2 billion images. If we train on LAION 5B, which contains 5 billion images, will the quality and prompt understanding go up a lot?
source: https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md
-
Open-source rival for OpenAI’s DALL-E runs on your graphics card
My hunch is that is the result of this: https://github.com/CompVis/stable-diffusion#weights
> 515k steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, estimated aesthetics score > 5.0)
https://github.com/LAION-AI/laion-datasets/blob/main/laion-a... for more details.
What's remarkable is this: https://github.com/LAION-AI/laion-datasets/blob/main/laion-a...
That aesthetic predictor was apparently trained on only 4000 images. If my thinking is correct, imagine the impact those 4000 ratings have had on all of the output of this model.
You can see samples (some NSFW) of different images from the original training set in different rating buckets here, to get an idea of what was included or not in those training steps. http://3080.rom1504.fr/aesthetic/aesthetic_viz.html
- "Laion-aesthetic is a subset of Laion5B that has been estimated by a model trained on top of CLIP embeddings to be aesthetic"
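The predictor described in these comments, a small model trained on top of CLIP embeddings to map an image to a 0-10 aesthetic rating, can be sketched roughly as follows. This is an illustrative stand-in, not the actual LAION predictor: it uses closed-form ridge regression as the "model on top of CLIP embeddings", and the `> 5.0` cutoff mirrors the filtering step quoted above.

```python
import numpy as np

def fit_aesthetic_head(embeds, ratings, l2=1e-2):
    """Closed-form ridge regression mapping an embedding to a 0-10 aesthetic score."""
    X = np.hstack([embeds, np.ones((len(embeds), 1))])  # append a bias column
    A = X.T @ X + l2 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ ratings)

def predict_scores(embeds, w):
    X = np.hstack([embeds, np.ones((len(embeds), 1))])
    return X @ w

def aesthetic_subset(embeds, w, cutoff=5.0):
    """Indices of images whose predicted score clears the cutoff (cf. score > 5.0)."""
    return np.flatnonzero(predict_scores(embeds, w) > cutoff)
```

With only ~4,000 rated images as training data, such a head is cheap to fit, which is exactly why those few thousand ratings can end up steering what billions of candidate images survive the filter.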
simulacrabot
-
The current model was trained on LAION 2B, a 100 TB dataset containing 2 billion images. If we train on LAION 5B, which contains 5 billion images, will the quality and prompt understanding go up a lot?
One of the LAION contributors gathered 4k images and 0-10 ratings for image appearance (but the ratings and images all seem to be from this AI generator model?).
-
r/SimulacraBot Lounge
What is this? https://github.com/JD-P/simulacrabot
What are some alternatives?
clip-interrogator - Image to prompt with BLIP and CLIP
dalle-2-preview
stable-diffusion - A latent text-to-image diffusion model
dalle-mini - DALL·E Mini - Generate images from a text prompt