score_sde
| | guided-diffusion | score_sde |
|---|---|---|
| Mentions | 14 | 6 |
| Stars | 5,439 | 1,242 |
| Growth | 2.9% | - |
| Activity | 0.0 | 0.0 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guided-diffusion
-
Why is there speculation that Midjourney is based on Stable Diffusion if MJ was released earlier than SD?
The people who kept improving these Colabs are largely the same people who are at Midjourney now. But the "mother" of it all was Katherine Crowson. She fine-tuned a 512x512 unconditional ImageNet diffusion model from OpenAI's 512x512 class-conditional ImageNet diffusion model (https://github.com/openai/guided-diffusion) and used it together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images. Her notebook also uses a smaller secondary diffusion model, trained by Katherine Crowson, to remove noise from intermediate timesteps and prepare them for CLIP.
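The CLIP-guidance idea described above — steering a diffusion sampler with the gradient of a text-image similarity score — can be sketched in toy form. Everything here is illustrative: `eps_model` and `guidance_fn` are hypothetical stand-ins for the real noise-prediction network and the CLIP similarity, and the update rule is drastically simplified compared to the actual guided-diffusion sampler.

```python
import numpy as np

def guided_step(x_t, eps_model, guidance_fn, t, scale=1.0, step=1e-2):
    """One drastically simplified 'guided' update (toy sketch).

    eps_model(x, t) stands in for a noise-prediction network;
    guidance_fn(x) stands in for a CLIP text-image similarity score.
    The gradient of the score nudges the sample toward the prompt.
    """
    eps = eps_model(x_t, t)
    # Finite-difference gradient of the guidance score (a stand-in
    # for the autograd gradient used in the real CLIP-guided samplers).
    grad = np.zeros_like(x_t)
    for i in range(x_t.size):
        d = np.zeros_like(x_t)
        d.flat[i] = step
        grad.flat[i] = (guidance_fn(x_t + d) - guidance_fn(x_t - d)) / (2 * step)
    # Denoise a little, then shift by the scaled guidance gradient.
    return x_t - eps + scale * grad
```

The real samplers apply this gradient to the model's predicted mean at every timestep, with noise schedules and scaling terms omitted here for brevity.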
-
Any Tips on OpenAI's Guided Diffusion?
I am trying to use OpenAI's Guided Diffusion GitHub repo to train my own diffusion model. I thought I'd ask here to see if anyone has experience with it, as I've been having trouble training my own models. If anyone has resources to point me toward, it would be greatly appreciated!
-
We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
-
guided diffusion super resolution network training is diverging
I am working with guided diffusion and would like to reproduce the repository's results for the 64→256 super-resolution network. https://github.com/openai/guided-diffusion
-
New custom inpainting model
this code is (mostly) just the original openai guided diffusion code: https://github.com/openai/guided-diffusion
-
Tips for Training Diffusion Model (DD) With Images and Resource Links
Starting resource, as it is all done through this code (information on how to do it on Colab is out there) https://github.com/openai/guided-diffusion
-
What was Disco trained with?
Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.
-
[D] Diffusion Models Beat GANs on Image Synthesis Explained: 5-minute paper summary (by Casual GAN Papers)
Code for https://arxiv.org/abs/2105.05233 found: https://github.com/openai/guided-diffusion
- "Everything the AI can create" using diffusion model
-
Since this sub has a fair portion of AI-generated images, have you guys seen OpenAI's guided diffusion models yet?
Paper, repo, Colab. It's really good.
score_sde
- Ask HN: How to get back into AI?
-
[D] Variance of sampling in diffusion models
Perhaps the ODE interpretation would be helpful (see here and here), which turns DDPMs into neural ODEs via the Fokker-Planck equation; after the initial starting noise, the sampling process is deterministic. If samples are noisy even with the full number of steps, you might need to increase the number of steps further.
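The determinism claim is easy to demonstrate on a toy problem. The sketch below is not a trained score network: it uses a 1-D process whose marginal at time t is N(0, 1 + t), so the score -x/(1 + t) is known in closed form, and it Euler-integrates the corresponding probability-flow ODE. The same starting noise always produces the same sample, and more steps shrink the solver error.

```python
import numpy as np

def drift(x, t, sigma0_sq=1.0):
    # Probability-flow ODE drift for a toy variance-exploding process whose
    # marginal at time t is N(0, sigma0_sq + t); the analytic score is
    # -x / (sigma0_sq + t), so the drift is -0.5 * score = 0.5 * x / (sigma0_sq + t).
    return 0.5 * x / (sigma0_sq + t)

def sample(x_T, n_steps):
    """Euler-integrate the ODE from t=1 down to t=0 (fully deterministic)."""
    x, ts = x_T, np.linspace(1.0, 0.0, n_steps + 1)
    for a, b in zip(ts[:-1], ts[1:]):
        x = x + (b - a) * drift(x, a)
    return x

exact = 2.0 / np.sqrt(2.0)   # analytic solution: x(0) = x(1) / sqrt(2)
coarse, fine = sample(2.0, 10), sample(2.0, 1000)
```

Unlike the stochastic DDPM sampler, there is no fresh noise injected per step, so two runs from the same starting point agree bit-for-bit; the only error left is the ODE solver's discretization error.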
-
[D] Why is the diffusion model so powerful, yet the math behind it so simple?
Turns out that diffusion models also define a certain differential equation, making them a neural ODE. Then you can just integrate the ODE in the other direction to get the exact inverse of the DDPM (it's not entirely exact because of numerical error in the solver, but close enough).
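The "integrate the ODE in the other direction" point can be illustrated with a toy setup: a 1-D process with a closed-form score, whose probability-flow ODE is Euler-integrated from t=1 down to t=0 to "sample", and then from 0 back up to 1 to invert. This is a sketch with an analytic score, not a trained model; the round trip recovers the starting point up to solver error, matching the caveat above.

```python
import numpy as np

def drift(x, t, sigma0_sq=1.0):
    # Toy probability-flow ODE drift: the marginal at time t is
    # N(0, sigma0_sq + t), so the score is -x / (sigma0_sq + t)
    # and the ODE drift is 0.5 * x / (sigma0_sq + t).
    return 0.5 * x / (sigma0_sq + t)

def integrate(x, t0, t1, n_steps=1000):
    """Deterministic Euler integration of the ODE from t0 to t1."""
    ts = np.linspace(t0, t1, n_steps + 1)
    for a, b in zip(ts[:-1], ts[1:]):
        x = x + (b - a) * drift(x, a)
    return x

x_T = 2.0                           # starting 'noise' at t = 1
x_0 = integrate(x_T, 1.0, 0.0)      # 'sample' by integrating backward
x_T_rec = integrate(x_0, 0.0, 1.0)  # invert by integrating forward again
```

With a stochastic sampler this inversion is impossible, because the injected noise is not recoverable; the deterministic ODE view is what makes the map between noise and sample a bijection.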
- [D] Are DDPMs a variation on Score-Based Generative Modeling? Or is there a fundamental difference between the two?
-
Diffusion Models Beat GANs on Image Synthesis
This new approach to generative modelling looks very intriguing.
In a similar ilk, there's this ICLR paper from this year using stochastic differential equations for generative modelling: https://arxiv.org/abs/2011.13456
- [D] Efficient, concurrent input pipelines in JAX?
What are some alternatives?
disco-diffusion
pytorch-generative - Easy generative modeling in PyTorch.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
SDE - Example codes for the book Applied Stochastic Differential Equations
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
Financial-Models-Numerical-Methods - Collection of notebooks about quantitative finance, with interactive python code.
ColossalAI - Making large AI models cheaper, faster and more accessible
Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - [ECCV 2022] Compositional Generation using Diffusion Models
denoising-diffusion-pytorch - Implementation of Denoising Diffusion Probabilistic Model in Pytorch
best-of-ml-python - 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
glid-3-xl-stable - stable diffusion training
score_sde_pytorch - PyTorch implementation for Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Oral)