collage-diffusion-ui vs consistency-models
| | collage-diffusion-ui | consistency-models |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 50 | 192 |
| Growth | - | - |
| Activity | 4.7 | 5.6 |
| Last Commit | 8 months ago | about 1 year ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
collage-diffusion-ui
Collage Diffusion: use a familiar Photoshop-like layered interface to control diffusion models! (free demo)
The UI/implementation is open source at: https://github.com/linden-li/collage-diffusion-ui Free demo at: https://collagediffusion.stanford.edu/
Collage Diffusion: a Photoshop-like layered interface for Stable Diffusion
Code at: https://github.com/linden-li/collage-diffusion-ui
consistency-models
AI is getting scary
Three: This one technically came out in early March, but we didn't hear about it until the 12th. [2303.01469] Consistency Models (arxiv.org)
- Introducing Consistency: OpenAI has released the code for its new one-shot image generation technique. Unlike Diffusion, which requires multiple steps of Gaussian noise removal, this method can produce realistic images in a single step. This enables real-time AI image creation from natural language.
- Goodbye Diffusion. Hello Consistency. The code for OpenAI's new approach to AI image generation is now available. This one-shot approach, as opposed to the multi-step Gaussian perturbation method of Diffusion, opens the door to real-time AI image generation.
- Consistency Models
OpenAI releases Consistency Model for one-step generation
tl;dr: a faster alternative to diffusion models for image and A/V generation.
Abstract of the paper:
> Diffusion models have made significant breakthroughs in image, audio, and video generation, but they depend on an iterative generation process that causes slow sampling speed and caps their potential for real-time applications. To overcome this limitation, we propose consistency models, a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either as a way to distill pre-trained diffusion models, or as standalone generative models. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step generation. For example, we achieve the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained as standalone generative models, consistency models also outperform single-step, non-adversarial generative models on standard benchmarks like CIFAR-10, ImageNet 64x64 and LSUN 256x256.
https://arxiv.org/abs/2303.01469
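The one-step vs. few-step trade-off the abstract describes can be sketched as below. This is a minimal illustration, not the official repo's code: the consistency function is a toy rescaling stand-in for the paper's trained network, and the noise-level values are illustrative assumptions.

```python
import numpy as np

def consistency_fn(x, t, sigma_min=0.002):
    # Stand-in for a trained consistency model f_theta(x, t), which maps a
    # noisy sample at noise level t directly to an estimate of clean data.
    # A real model is a neural network; this toy rescaling just keeps the
    # sketch runnable.
    return x * sigma_min / max(t, sigma_min)

def sample(shape, steps, t_max=80.0, sigma_min=0.002, seed=0):
    """Multistep consistency sampling: one model evaluation per step.
    steps=1 is one-shot generation; larger steps trades compute for quality
    by re-noising to a lower level and denoising again."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape) * t_max   # start from pure noise at level t_max
    x = consistency_fn(x, t_max)         # single evaluation: one-step sample
    # Optional refinement at intermediate noise levels (empty when steps=1).
    ts = np.linspace(t_max, sigma_min, steps + 1)[1:-1]
    for t in ts:
        z = rng.normal(size=shape)
        x_t = x + np.sqrt(max(t**2 - sigma_min**2, 0.0)) * z  # re-noise
        x = consistency_fn(x_t, t)                            # denoise again
    return x

one_step = sample((4, 4), steps=1)   # single forward pass
few_step = sample((4, 4), steps=4)   # three extra refinement evaluations
```

Each refinement iteration costs exactly one extra model evaluation, which is why few-step sampling stays far cheaper than the hundreds of iterations typical of diffusion sampling.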
- [P] Consistency: Diffusion in a Single Forward Pass 🚀
What are some alternatives?
IF-webui - DeepFloyd IF web UI
stable_diffusion_playground - Playing around with stable diffusion. Generated images are reproducible because I save the metadata and latent information. You can generate and then later interpolate between the images of your choice.
LAMP - Official implement code of LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model (Few-shot-based text-to-video diffusion)
consistency_models - Official repo for consistency models.
asset-generator - A powerful application to generate AI assets using DALL-E, Stable Diffusion, and DeepAI.
Ckpt2Diff - This user-friendly wizard is used to convert a Stable Diffusion Model from CKPT format to Diffusers format.
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-inference]
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion]
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.