DiffusionFastForward vs score_sde_pytorch

| | DiffusionFastForward | score_sde_pytorch |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 506 | 1,401 |
| Growth | - | - |
| Activity | 4.4 | 0.0 |
| Latest commit | 11 months ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
DiffusionFastForward

- Using Stable Diffusion's training method for Reverse engineering?
  > I can recommend this open-source course I made for understanding the details of denoising diffusion for images: https://github.com/mikonvergence/DiffusionFastForward
- [R] Training Small Diffusion Model
- [P] A minimal framework for image diffusion (including high-resolution)
score_sde_pytorch

- [D] Score-based vs. diffusion models
  > There's an implementation of score-based models from the paper that showed how score-based models and diffusion models are the same here: https://github.com/yang-song/score_sde_pytorch
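The core idea behind that equivalence is that sampling can be driven entirely by the score function (the gradient of the log density). A minimal toy sketch of this, using unadjusted Langevin dynamics with a known analytic score for a 1-D Gaussian target (all names here are illustrative and not taken from the linked repository):

```python
import numpy as np

def score(x):
    # Analytic score of the toy target N(2, 1): grad log p(x) = -(x - 2)
    return -(x - 2.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)   # start from pure N(0, 1) noise
step = 0.01
for _ in range(2_000):
    # Langevin update: x <- x + eps * score(x) + sqrt(2 * eps) * z
    x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

# After many steps the samples approximate the target N(2, 1)
print(x.mean(), x.std())
```

In a real score-based generative model the analytic `score` above is replaced by a learned, time-conditioned network, and the sampler follows a reverse-time SDE rather than fixed-temperature Langevin steps; this sketch only illustrates the score-driven sampling principle.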
- Machine learning and black box numerical solver [D]
  > Someone has already mentioned Neural Ordinary Differential Equations, which is also the first thing that came to mind. There are also extensions to it where one can use PDEs (Neural Hamiltonian Flows) or even stochastic DEs (Score-Based Generative Models) in the model. All of them cover different but overlapping use cases.
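The Neural ODE idea mentioned above reduces to integrating a learned vector field. A minimal sketch, assuming a tiny randomly initialized MLP as the vector field and a fixed-step Euler integrator (all names and sizes are illustrative):

```python
import numpy as np

def vector_field(x, params):
    # A tiny MLP playing the role of f_theta(x) in dx/dt = f_theta(x)
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def odeint_euler(f, x0, t0, t1, steps=100):
    # Fixed-step Euler integration of dx/dt = f(x) from t0 to t1
    x = x0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * f(x)
    return x

rng = np.random.default_rng(0)
params = (0.1 * rng.standard_normal((2, 16)), np.zeros(16),
          0.1 * rng.standard_normal((16, 2)), np.zeros(2))
x0 = np.array([1.0, 0.0])
x1 = odeint_euler(lambda x: vector_field(x, params), x0, 0.0, 1.0)
```

Production Neural ODE implementations use adaptive solvers and the adjoint method for memory-efficient gradients; Euler is used here only to keep the sketch self-contained.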
- [Discussion] Could someone explain the math behind the number of distinct images that can be generated with a latent diffusion model?
  > I was considering an unconditional latent diffusion model; for conditional models the computation becomes much more complex (we might have to use Bayes' theorem here). If we use Score-Based Generative Modeling (https://arxiv.org/abs/2011.13456), we could try to find and count all the unique local minima and saddle points, but it is not clear how we can do this...
- [D] Machine Learning - WAYR (What Are You Reading) - Week 138
  > You can find an implementation here: https://github.com/yang-song/score_sde_pytorch/blob/main/models/ddpm.py
What are some alternatives?

- dino-diffusion - Bare-bones diffusion model code
- dpm-solver - Official code for "DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps" (NeurIPS 2022 Oral)
- Self-Attention-Guidance - Official implementation of the paper "Improving Sample Quality of Diffusion Models Using Self-Attention Guidance" (ICCV 2023)
- Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - [ECCV 2022] Compositional Generation using Diffusion Models
- blended-latent-diffusion - Official implementation for "Blended Latent Diffusion" [SIGGRAPH 2023]
- diffusion_models - Minimal standalone example of a diffusion model
- score_sde - Official code for "Score-Based Generative Modeling through Stochastic Differential Equations" (ICLR 2021, Oral)
- seed-alchemy - Frontend UI and backend server for Stable Diffusion models
- Magic123 - [ICLR 2024] Official PyTorch implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
- audio-diffusion - Apply diffusion models using the Hugging Face diffusers package to synthesize music instead of images