Guided-diffusion Alternatives
Similar projects and alternatives to guided-diffusion
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
- score_sde: official code for "Score-Based Generative Modeling through Stochastic Differential Equations" (ICLR 2021, Oral)
- lightning: deep learning framework to train, deploy, and ship AI products Lightning fast
- ColossalAI: making large AI models cheaper, faster, and more accessible
- denoising-diffusion-pytorch: implementation of Denoising Diffusion Probabilistic Models in PyTorch
guided-diffusion reviews and mentions
- Why is there speculation that Midjourney is based on Stable Diffusion if MJ was released earlier than SD?
  The people who made these colabs better and better are also the same people who are at Midjourney now. But the "mother" of it all was Katherine Crowson. She fine-tuned a 512x512 unconditional ImageNet diffusion model from OpenAI's 512x512 class-conditional ImageNet diffusion model (https://github.com/openai/guided-diffusion) and used it together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images. It also uses a smaller secondary diffusion model, trained by Katherine Crowson, to remove noise from intermediate timesteps and prepare them for CLIP.
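For readers unfamiliar with the technique, the snippet below is a minimal, illustrative sketch of the CLIP-guidance idea described in that comment, not Crowson's actual notebook code: it computes the gradient of the CLIP image-text similarity with respect to a candidate image, which a diffusion sampler (for example via guided-diffusion's cond_fn hook) can use to nudge each denoising step toward a text prompt. The prompt string and the random input at the end are placeholders.

```python
# Minimal, illustrative sketch of CLIP guidance (not the original notebook code).
# Assumes torch and OpenAI's CLIP package (pip install git+https://github.com/openai/CLIP).
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()  # keep everything in fp32 for simplicity

prompt = "a watercolor painting of a lighthouse"  # placeholder prompt
with torch.no_grad():
    text_features = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# CLIP's standard input normalization statistics.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def clip_grad(x):
    """Gradient of the CLIP image-text similarity w.r.t. an image batch in [-1, 1].

    A diffusion sampler can add a scaled version of this gradient at each
    denoising step to steer generation toward the prompt.
    """
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        # Map from the diffusion model's [-1, 1] range to [0, 1] and resize
        # to CLIP's expected 224x224 input.
        img = F.interpolate((x_in + 1) / 2, size=224, mode="bilinear", align_corners=False)
        image_features = clip_model.encode_image((img - CLIP_MEAN) / CLIP_STD)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        similarity = (image_features * text_features).sum()
        return torch.autograd.grad(similarity, x_in)[0]

# Example: gradient for a random 256x256 "image" batch of size 1.
grad = clip_grad(torch.randn(1, 3, 256, 256, device=device))
```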
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
  Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
- New custom inpainting model
  This code is (mostly) just the original OpenAI guided-diffusion code: https://github.com/openai/guided-diffusion
- What was Disco trained with?
  Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet diffusion model or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.
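As a rough illustration of how one of those checkpoints is typically loaded, the sketch below uses the helpers from openai/guided-diffusion for the 256x256 unconditional ImageNet model. The flag values mirror what the repo's README lists for that checkpoint, but treat them, and the checkpoint filename, as assumptions to verify against the README.

```python
# Sketch of loading the 256x256 unconditional checkpoint with the repo's helpers.
# Assumes guided-diffusion is installed and the checkpoint from its README has
# been downloaded; flag values are an assumption based on that README.
import torch
from guided_diffusion.script_util import (
    model_and_diffusion_defaults,
    create_model_and_diffusion,
)

options = model_and_diffusion_defaults()
options.update(
    dict(
        image_size=256,
        class_cond=False,          # unconditional ImageNet model
        learn_sigma=True,
        num_channels=256,
        num_head_channels=64,
        num_res_blocks=2,
        attention_resolutions="32,16,8",
        resblock_updown=True,
        use_scale_shift_norm=True,
        use_fp16=False,            # keep fp32 here for simplicity
        diffusion_steps=1000,
        noise_schedule="linear",
        timestep_respacing="250",  # sample with fewer steps than training used
    )
)
model, diffusion = create_model_and_diffusion(**options)
model.load_state_dict(torch.load("256x256_diffusion_uncond.pt", map_location="cpu"))
model.eval()

# Unconditional sampling; a cond_fn (e.g. a CLIP gradient as sketched above)
# can be passed to steer the samples toward a prompt or class.
samples = diffusion.p_sample_loop(
    model,
    (1, 3, 256, 256),
    clip_denoised=True,
    model_kwargs={},
)
```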
- [D] Diffusion Models Beat GANs on Image Synthesis Explained: 5-minute paper summary (by Casual GAN Papers)
  Code for https://arxiv.org/abs/2105.05233 found: https://github.com/openai/guided-diffusion
- Diffusion Models Beat GANs on Image Synthesis
  Although the weights aren't available, I wanted to note that the model source itself is actually available at https://github.com/openai/guided-diffusion.
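Since the repo's main contribution is classifier guidance, here is a hedged sketch of the kind of cond_fn its samplers accept: the gradient of a noisy classifier's log-probability for a target class, scaled by a guidance weight. The classifier below is a throwaway stand-in so the snippet runs on its own; the actual code in the repo uses a classifier trained on noisy ImageNet images.

```python
# Sketch of classifier guidance as in "Diffusion Models Beat GANs on Image
# Synthesis": the sampler shifts each denoising step by the gradient of
# log p(y | x_t) from a classifier trained on noisy images.
import torch
import torch.nn.functional as F

class DummyNoisyClassifier(torch.nn.Module):
    """Throwaway stand-in for the repo's trained noisy ImageNet classifier."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.net = torch.nn.Conv2d(3, num_classes, kernel_size=3, padding=1)

    def forward(self, x, t):
        # A real noisy classifier conditions on the timestep t; this dummy ignores it.
        return self.net(x).mean(dim=(2, 3))

classifier = DummyNoisyClassifier()
classifier_scale = 1.0  # guidance weight; larger values trade diversity for fidelity

def cond_fn(x, t, y=None):
    """Gradient of log p(y | x_t) w.r.t. x_t, scaled by classifier_scale.

    guided-diffusion's sampling loops accept a function of this shape
    through their cond_fn argument.
    """
    assert y is not None
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        logits = classifier(x_in, t)
        log_probs = F.log_softmax(logits, dim=-1)
        selected = log_probs[range(len(logits)), y.view(-1)]
        return torch.autograd.grad(selected.sum(), x_in)[0] * classifier_scale

# Example call with a random noisy batch and target class 207.
grad = cond_fn(
    torch.randn(2, 3, 64, 64),
    t=torch.tensor([500, 500]),
    y=torch.tensor([207, 207]),
)
```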
Stats
openai/guided-diffusion is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of guided-diffusion is Python.