dalle-2-preview vs latent-diffusion


| | dalle-2-preview | latent-diffusion |
|---|---|---|
| Mentions | 61 | 70 |
| Stars | 1,044 | 12,310 |
| Growth | 0.0% | 1.6% |
| Activity | 1.8 | 0.0 |
| Latest commit | over 2 years ago | 12 months ago |
| Primary language | - | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-2-preview
- Microsoft-backed OpenAI to let users customize ChatGPT | Reuters
We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed).
- OpenAI AI not available for Algeria, gotta love Algeria
- The argument against the use of datasets seems ultimately insincere and pointless
From this OpenAI document:
- Dalle-2 is > 1,000x as dollar efficient as hiring a human illustrator.
It's also worth noting that you can't sell a game made this way, as Dalle-2's terms of service prevent use in commercial projects. It's hard to justify the rate of return when you can only ever give it away for free, and even then there are uncertain legal questions around copyright and the images used to train the dataset.
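A rough back-of-envelope check of that ">1,000x" claim; neither figure comes from the thread (the per-generation cost assumes DALL-E 2's $15-per-115-credits pricing at the time, and the illustrator rate is purely an assumed figure):

```python
# Back-of-envelope only; both prices are assumptions, not figures from the thread.
dalle2_cost_per_image = 15.00 / 115      # ~$0.13, assuming the $15-per-115-credits pricing
illustrator_cost_per_image = 150.00      # assumed flat commission rate per illustration

ratio = illustrator_cost_per_image / dalle2_cost_per_image
print(f"~{ratio:,.0f}x cheaper per image under these assumptions")  # ~1,150x
```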
- It's pretty obvious where dalle-2 gets some of its training data from! Has anyone else had the Getty Images watermark show up? Prompt was "man in a suit standing in a fountain with his hair on fire."
On their GitHub https://github.com/openai/dalle-2-preview/blob/main/system-card.md I can only see references to v1.
- “Pinterest” for Dalle-2 images and prompts
"b) Exploration of the bolded part of OpenAI's comment "Each generated image includes a signature in the lower right corner, with the goal of indicating when DALL·E 2 helped generate a certain image." (source)." (source link: https://github.com/openai/dalle-2-preview/blob/main/system-c...)
I feel the DALL-E 2 watermark signature could be a seed or something.
- I’m an outsider to digital art and have a couple questions about A.I created art.
- The AI Art Apocalypse
DALL-E's docs, for example, mention that it can output whole copyrighted logos and characters[1] and acknowledge it's possible to generate human faces that bear the likeness of people in the training data. We've also seen people recently critique Stable Diffusion's output for attempting to recreate artists' signatures that came from the commercial training data.
That said, at a certain point the kinks will be ironed out, and these models will likely skirt such issues by incorporating or manipulating just enough to be considered fair use and creative transformation.
[1] "The model can generate known entities including trademarked logos and copyrighted characters." https://github.com/openai/dalle-2-preview/blob/main/system-c...
- I worked on the DALL-E project, ask me anything (AMA)
- Official Dalle server: Why “furry art” is a banned phrase
Some types of content were purposely excluded from the training dataset(s) (source).
latent-diffusion
- SDXL: The next generation of Stable Diffusion models for text-to-image synthesis
Stable Diffusion XL (SDXL) is the latest text-to-image generation model developed by Stability AI, based on latent diffusion techniques. SDXL has the potential to create highly realistic images for media, entertainment, education, and industry, opening up new practical uses for AI imagery.
- Is it possible to create a checkpoint from scratch?
Here's a link to the early latent-diffusion repo, which might be able to create a blank model (I haven't tested it): https://github.com/CompVis/latent-diffusion
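If "blank model" means a randomly initialized network, a minimal sketch of what that could look like with that repo, assuming it is cloned and its `ldm` package is importable; the YAML path below is a placeholder for whichever training config you pick:

```python
# Sketch only: assumes the CompVis/latent-diffusion repo is cloned and on PYTHONPATH.
# The config path is a placeholder; substitute one of the YAMLs shipped under configs/.
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

config = OmegaConf.load("configs/latent-diffusion/your-config.yaml")
model = instantiate_from_config(config.model)   # builds the model with random ("blank") weights
print(sum(p.numel() for p in model.parameters()), "untrained parameters")
```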
- Anything better than pix2pixHD?
Latent diffusion could work for you: https://github.com/CompVis/latent-diffusion (https://arxiv.org/abs/2112.10752)
- Image Upscaler AI
There are a lot, but the one implemented as LDSR in most Stable Diffusion GUIs is this one: https://github.com/CompVis/latent-diffusion
- I've been collecting millions of images with only public domain/CC0 licensing. I'd like to train a stable diffusion model on the collection. Could someone share their knowledge of what this would take? Otherwise, simply enjoy my library.
CompVis/latent-diffusion: High-Resolution Image Synthesis with Latent Diffusion Models (github.com)
- Run CLIP on iPhone to Search Photos
The "retrieval based model" refers to https://github.com/CompVis/latent-diffusion#retrieval-augmen..., which uses ScaNN to train a knn embedding searcher.
- Class Action Lawsuit filed against Stable Diffusion and Midjourney.
Stability is basically https://github.com/CompVis/latent-diffusion + training data.
- [D] Influential papers round-up 2022. What are your favorites?
Found relevant code at https://github.com/CompVis/latent-diffusion + all code implementations here
- Can anyone explain the differences between sampling methods and their uses in simple terms? All the info I've found so far is either contradictory or so complex it goes over my head.
DDIM and PLMS were the original samplers; they shipped as part of the Latent Diffusion repository. Their names come from the papers that introduced them: Denoising Diffusion Implicit Models and Pseudo Numerical Methods for Diffusion Models on Manifolds.
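To make that concrete, here is a heavily simplified, deterministic (eta = 0) DDIM-style update step; `eps_model` and the `alpha_bar` schedule are stand-ins for a trained noise-prediction network and its noise schedule, not code from the repository:

```python
import torch

# Simplified DDIM step with eta = 0 (fully deterministic); placeholder model and schedule.
def ddim_step(x_t, t, t_prev, eps_model, alpha_bar):
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)                                     # predicted noise at step t
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # estimate of the clean latent
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps  # jump directly to step t_prev

# Toy usage with a dummy model that predicts zero noise:
alpha_bar = torch.linspace(0.9999, 0.01, 1000)
x = torch.randn(1, 4, 64, 64)
x = ddim_step(x, t=999, t_prev=949, eps_model=lambda x, t: torch.zeros_like(x), alpha_bar=alpha_bar)
```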
- AI art is very dystopian.
yes, https://github.com/CompVis/latent-diffusion
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
hent-AI - Automation of censor bar detection
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
disco-diffusion
clip-interrogator - Image to prompt with BLIP and CLIP
stable-diffusion - A latent text-to-image diffusion model

