denoising-diffusion-pytorch
Awesome-Diffusion-Models
| | denoising-diffusion-pytorch | Awesome-Diffusion-Models |
|---|---|---|
| Mentions | 11 | 6 |
| Stars | 6,994 | 10,030 |
| Growth | - | - |
| Activity | 8.6 | 6.1 |
| Latest commit | 15 days ago | about 2 months ago |
| Language | Python | HTML |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
denoising-diffusion-pytorch
-
Help using torchaudio and spectrograms for diffusion
I’m trying to train a diffusion model using this code (https://github.com/lucidrains/denoising-diffusion-pytorch). My idea is to take a short audio segment, transform it into a spectrogram, train the model on these images, have it generate new spectrograms, and then convert those back to audio. However, the model requires square images, and I cannot for the life of me figure out how to make a square spectrogram. Also, is a regular spectrogram or a mel spectrogram better for this application?
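One way to get a (nearly) square spectrogram is to pick the hop length from the clip length and the number of mel bands. A minimal sketch of that parameter arithmetic, assuming torchaudio-style framing where center-padding produces `num_samples // hop_length + 1` frames (the function name and the concrete numbers are illustrative, not from the repo):

```python
# Sketch: choose hop_length so an image_size-band mel spectrogram has
# roughly image_size time frames, i.e. a square image for the diffusion model.
# Assumes torchaudio's default center=True framing:
#   n_frames = num_samples // hop_length + 1

def square_spectrogram_params(num_samples: int, image_size: int):
    """Return (hop_length, n_frames) targeting an image_size x image_size output."""
    hop_length = num_samples // image_size
    n_frames = num_samples // hop_length + 1
    return hop_length, n_frames

# One second of audio at 22.05 kHz, targeting a 128x128 image:
hop, frames = square_spectrogram_params(22050, 128)
print(hop, frames)  # → 172 129  (trim/crop the extra frame to get exactly 128)
```

These values would then be passed as `hop_length` and `n_mels=image_size` to something like `torchaudio.transforms.MelSpectrogram`; mel spectrograms are the common choice for generative audio work because the mel axis compresses the spectrum to a perceptually motivated, fixed number of bands.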
-
Implementation of Google's MusicLM in PyTorch
Generally these come without weights, and MusicLM is also a WIP; more mature implementations have descriptions of how to train them and follow-ups on small-scale/crowd-sourced experiments & research[1].
[1]: https://github.com/lucidrains/denoising-diffusion-pytorch
-
[D] Time Embedding in Diffusion Model
[1] https://colab.research.google.com/drive/1sjy9odlSSy0RBVgMTgP7s99NXsqglsUL?usp=sharing#scrollTo=KOYPSxPf_LL7 [2] https://github.com/lucidrains/denoising-diffusion-pytorch/blob/main/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
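The time embedding in lucidrains' implementation follows the Transformer-style sinusoidal scheme (the repo calls it `SinusoidalPosEmb`). A minimal pure-Python sketch of that idea, with the frequency base 10000 taken from the usual convention (exact details may differ from the linked code):

```python
import math

# Sketch of a sinusoidal timestep embedding as used in many DDPM codebases:
# half the dimensions get sin(t * freq_i), the other half cos(t * freq_i),
# with frequencies spaced geometrically from 1 down to 1/10000.
# Requires dim >= 4 (so half - 1 > 0).

def sinusoidal_time_embedding(t: int, dim: int) -> list:
    half = dim // 2
    emb = []
    for i in range(half):
        freq = math.exp(-math.log(10000.0) * i / (half - 1))
        emb.append(math.sin(t * freq))
    for i in range(half):
        freq = math.exp(-math.log(10000.0) * i / (half - 1))
        emb.append(math.cos(t * freq))
    return emb

e = sinusoidal_time_embedding(10, 32)
print(len(e))  # → 32
```

In the real model this vector is produced per batch element as a tensor and then passed through a small MLP before being added into each U-Net block, which is how the network learns to condition on the noise level t.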
-
[D] Can a Diffusion Model be trained with an NVIDIA TITAN X?
Sure. I am using: https://github.com/lucidrains/denoising-diffusion-pytorch
-
[D] Resources to learn and fully understand Diffusion Model Codes
Lucidrains GitHub is always my go to repo for understandable paper implementations https://github.com/lucidrains/denoising-diffusion-pytorch
-
Diffusion model generated exactly the same image as the training image
Thanks for the reply. Do you have any suggestions for what I should do if I wanted to train a model to generate half-cat, half-butterfly images? I cloned the code from https://github.com/lucidrains/denoising-diffusion-pytorch and trained it from scratch.
-
[D] Best diffusion model archetype to train?
DDIM and DDPM are the same model to train; they only differ at inference time. To start, I would recommend building from lucidrains' MIT-licensed version (https://github.com/lucidrains/denoising-diffusion-pytorch). Just play around with the models until you gain an intuition.
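The "same training, different inference" point can be made concrete with the shared reverse-step formula: a single `eta` knob interpolates between DDIM's deterministic update (eta=0) and DDPM's stochastic one (eta=1). A scalar sketch, where `eps_pred` stands in for the output of the trained noise-prediction network and the alpha-bar values for the noise schedule:

```python
import math
import random

# Sketch of the generalized reverse diffusion step (Song et al.'s DDIM form).
# eta=0 -> deterministic DDIM step; eta=1 -> recovers DDPM's stochastic step.
# Scalars stand in for image tensors; eps_pred would come from the trained U-Net.

def reverse_step(x_t, eps_pred, abar_t, abar_prev, eta=0.0):
    # Predict x0 from the current sample and the predicted noise.
    x0 = (x_t - math.sqrt(1 - abar_t) * eps_pred) / math.sqrt(abar_t)
    # DDIM's variance schedule, scaled by eta.
    sigma = eta * math.sqrt((1 - abar_prev) / (1 - abar_t)) \
                * math.sqrt(1 - abar_t / abar_prev)
    # Direction pointing back toward x_t, plus optional fresh noise.
    dir_xt = math.sqrt(max(1 - abar_prev - sigma ** 2, 0.0)) * eps_pred
    noise = sigma * random.gauss(0, 1)
    return math.sqrt(abar_prev) * x0 + dir_xt + noise

# eta=0: fully deterministic (DDIM); the random draw is multiplied by zero.
x_prev = reverse_step(x_t=0.5, eps_pred=0.1, abar_t=0.5, abar_prev=0.8, eta=0.0)
print(round(x_prev, 4))  # ≈ 0.5877
```

Because the training objective (predicting the added noise) is identical either way, you can train once and try both samplers afterwards.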
-
We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning and Hugging Face. Thanks for open-sourcing!
-
[D] Introduction to Diffusion Models
Once you understand these papers you can begin to understand Palette, and from there I would start with an open-source diffusion implementation like this one and then modify it to suit your needs!
Awesome-Diffusion-Models
-
Ask HN: How do you catch up to the research of LLMs/Transformers etc.?
3. https://github.com/diff-usion/Awesome-Diffusion-Models
-
Awesome-Diffusion-Models
GitHub repository that contains a collection of resources and papers on Diffusion Models: https://github.com/heejkoo/Awesome-Diffusion-Models
- GitHub - heejkoo/Awesome-Diffusion-Models: A collection of resources and papers on Diffusion Models and Score-matching Models, a dark horse in the field of Generative Models
-
Wiskkey's lists of text-to-image systems and related resources
(Added Apr. 10, 2022) Awesome Diffusion Models.
-
[D] Introduction to Diffusion Models
I certainly should have looked for a lucidrains implementation, but I also found https://github.com/heejkoo/Awesome-Diffusion-Models which had some helpful links too.
What are some alternatives?
ALAE - [CVPR2020] Adversarial Latent Autoencoders
Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - [ECCV 2022] Compositional Generation using Diffusion Models
autoregressive - Autoregressive Models in PyTorch.
ML-University - Machine Learning Open Source University
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
dalle-mini - DALL·E Mini - Generate images from a text prompt
RAVE - Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
papers-I-read - A-Paper-A-Week
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
MachineLearning-BaseballPrediction-BlazorApp - Machine Learning over historical baseball data using latest Microsoft AI & Development technology stack (.Net Core & Blazor)
molecule-generation - Implementation of MoLeR: a generative model of molecular graphs which supports scaffold-constrained generation
CogView2 - official code repo for paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"