Similar projects and alternatives to guided-diffusion
NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number generally means a better guided-diffusion alternative or a more similar project.
guided-diffusion reviews and mentions
Posts with mentions or reviews of guided-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-27.
[D] Diffusion Models Beat GANs on Image Synthesis Explained: 5-minute paper summary (by Casual GAN Papers)
2 projects | reddit.com/r/MachineLearning | 27 Dec 2021
arxiv / code
Code for https://arxiv.org/abs/2105.05233 found: https://github.com/openai/guided-diffusion
"Everything the AI can create" using diffusion model
1 project | reddit.com/r/bigsleep | 1 Dec 2021
Since this sub has a fair portion of AI-generated images, have you guys seen OpenAI's guided diffusion models yet?
1 project | reddit.com/r/NovelAi | 27 Jul 2021
Paper, repo, Colab. It's really good.
[D] Paper Explained - DDPMs: Diffusion Models Beat GANs on Image Synthesis (Full Video Analysis)
1 project | reddit.com/r/MachineLearning | 15 May 2021
Diffusion Models Beat GANs on Image Synthesis
2 projects | news.ycombinator.com | 11 May 2021
Although the weights aren't available, I wanted to note that the model source itself is actually available at https://github.com/openai/guided-diffusion.
[R] Diffusion Models Beat GANs on Image Synthesis
1 project | reddit.com/r/MachineLearning | 11 May 2021
Abstract: We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for sample quality using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.85 on ImageNet 512×512. We release our code at this https URL
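The classifier guidance described in the abstract shifts the mean of each Gaussian reverse-diffusion step by the (scaled) gradient of a classifier's log-probability. A minimal toy sketch of that mean shift is below, using NumPy and an analytic stand-in classifier; the names `toy_classifier_log_prob_grad`, `guided_mean`, and `sample_step` are illustrative assumptions, not the API of the openai/guided-diffusion repo, which uses a trained U-Net denoiser and a trained noisy-image classifier.

```python
import numpy as np

def toy_classifier_log_prob_grad(x, target=2.0):
    # Gradient of log p(y | x) for a toy Gaussian "classifier" centered at
    # `target`. A real implementation would backpropagate through a trained
    # classifier evaluated on the noised image x_t.
    return -(x - target)

def guided_mean(mu, sigma, x, scale=1.0):
    # Classifier guidance: shift the reverse-step mean mu by
    # sigma^2 * scale * grad_x log p(y | x). A larger `scale` trades
    # diversity for sample quality, as described in the abstract.
    return mu + (sigma ** 2) * scale * toy_classifier_log_prob_grad(x)

def sample_step(x, mu, sigma, scale, rng):
    # One guided reverse-diffusion step: draw from N(guided_mean, sigma^2).
    return guided_mean(mu, sigma, x, scale) + sigma * rng.standard_normal(x.shape)
```

With `scale=0` the step reduces to ordinary (unguided) sampling; increasing the scale pulls samples toward the classifier's target class at the cost of diversity, which is the trade-off the paper quantifies with FID.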
Basic guided-diffusion repo stats
Last updated: about 1 month ago
openai/guided-diffusion is an open-source project licensed under the MIT License, an OSI-approved license.