Ckpt2Diff vs consistency-models

| | Ckpt2Diff | consistency-models |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 13 | 204 |
| Growth | - | 2.0% |
| Activity | 10.0 | 5.6 |
| Last commit | over 2 years ago | about 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ckpt2Diff
Ckpt2Diff - A user-friendly wizard to convert a Stable Diffusion model from CKPT format to Diffusers format.
The source code is available at https://github.com/Sunbread/Ckpt2Diff
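Ckpt2Diff is an interactive wizard around this conversion; for context, here is a minimal sketch of the same CKPT-to-Diffusers conversion done directly with Hugging Face's diffusers library (file paths are placeholders, not taken from the project):

```python
# Minimal sketch: convert a single-file Stable Diffusion checkpoint
# into the multi-folder Diffusers layout. Paths are placeholders.
from diffusers import StableDiffusionPipeline

# Load the legacy single-file checkpoint (.ckpt or .safetensors).
pipe = StableDiffusionPipeline.from_single_file("model.ckpt")

# Save in Diffusers format: unet/, vae/, text_encoder/, tokenizer/,
# and scheduler/ subfolders plus a model_index.json.
pipe.save_pretrained("converted-model")
```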
consistency-models
AI is getting scary
Three: This one technically came out in early March, but we didn't hear about it until the 12th. [2303.01469] Consistency Models (arxiv.org)
- Introducing Consistency: OpenAI has released the code for its new one-shot image generation technique. Unlike diffusion, which requires multiple steps of Gaussian noise removal, this method can produce realistic images in a single step. This enables real-time AI image creation from natural language.
- Goodbye Diffusion. Hello Consistency. The code for OpenAI's new approach to AI image generation is now available. This one-shot approach, as opposed to the multi-step Gaussian perturbation method of diffusion, opens the door to real-time AI image generation.
- Consistency Models
OpenAI releases Consistency Model for one-step generation
tl;dr: a faster alternative to diffusion models for image and A/V generation.
Abstract of the paper:
> Diffusion models have made significant breakthroughs in image, audio, and video generation, but they depend on an iterative generation process that causes slow sampling speed and caps their potential for real-time applications. To overcome this limitation, we propose consistency models, a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either as a way to distill pre-trained diffusion models, or as standalone generative models. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step generation. For example, we achieve the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained as standalone generative models, consistency models also outperform single-step, non-adversarial generative models on standard benchmarks like CIFAR-10, ImageNet 64x64 and LSUN 256x256.
https://arxiv.org/abs/2303.01469
- [P] Consistency: Diffusion in a Single Forward Pass 🚀
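The abstract above describes both one-step generation and few-step sampling that trades compute for sample quality. Below is a minimal sketch of the two samplers, assuming `f` is an already-trained consistency model callable (hypothetical here) and using the EDM-style noise range from the paper (sigma_min = 0.002, sigma_max = 80):

```python
import torch

def one_step_sample(f, shape, sigma_max=80.0):
    # One network evaluation: map pure noise at the maximum noise
    # level directly to a sample, x = f(x_T, sigma_max).
    x_T = torch.randn(shape) * sigma_max
    return f(x_T, sigma_max)

def multistep_sample(f, shape, sigmas, sigma_max=80.0, sigma_min=0.002):
    # Few-step refinement: alternately re-inject noise at a decreasing
    # schedule of levels and denoise again, trading compute for quality.
    x = f(torch.randn(shape) * sigma_max, sigma_max)
    for sigma in sigmas:  # decreasing, e.g. [24.4, 5.84, 0.9]
        z = torch.randn(shape)
        x_noisy = x + (sigma**2 - sigma_min**2) ** 0.5 * z
        x = f(x_noisy, sigma)
    return x
```

Because the consistency function maps any point on the same probability-flow ODE trajectory back to its origin, the one-step call already produces a full sample; the loop only refines it.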
What are some alternatives?
novelai-api - Python API for the NovelAI REST API
diffusion-expert - A drawing application with Stable Diffusion support
AI-image-tag-extractor - A tool to help you get image info.
collage-diffusion-ui - An open source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the details.
diffusionmagic - Easy-to-use Stable Diffusion workflows using diffusers
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.