onediff vs consistency-models
| | onediff | consistency-models |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 1,340 | 192 |
| Growth | 9.4% | - |
| Activity | 9.7 | 5.6 |
| Latest commit | 5 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onediff
Accelerating Stable Video Diffusion 3x Faster with OneDiff DeepCache and Int8
`--output-video path/to/output_image.mp4`
Run with ComfyUI
Run with OneDiff workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
Run with OneDiff + DeepCache workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
The use of Int8 can be referenced in the workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
Text-to-Image inference engine OneDiff 0.12 released
OneDiff 0.12 released! Highlights:
- switching image sizes no longer triggers re-compilation (i.e., no time cost);
- faster graph saving and loading;
- smaller static memory footprint.
Github: https://github.com/siliconflow/onediff
How to use: https://github.com/siliconflow/onediff/releases/tag/0.12.0
Additionally, OneDiff ComfyUI nodes are now available in ComfyUI-Manager!
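The "save and load a graph" highlight is a compile-once caching pattern: the compiled graph is serialized after the first run and reloaded on later runs, skipping recompilation. A minimal, framework-free sketch of that pattern (all function and file names here are illustrative, not the OneDiff API):

```python
import os
import pickle
import tempfile
import time


def compile_graph(model_fn):
    """Stand-in for an expensive trace/compile step."""
    time.sleep(0.01)  # pretend compilation takes a while
    return {"ops": ["matmul", "gelu", "matmul"], "fn": model_fn.__name__}


def get_or_build_graph(model_fn, cache_path):
    """Load a previously compiled graph if cached, else compile and save it."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f), True  # cache hit: no compile cost
    graph = compile_graph(model_fn)
    with open(cache_path, "wb") as f:
        pickle.dump(graph, f)
    return graph, False  # cache miss: compiled and saved


def unet(x):  # toy model standing in for a real UNet
    return x


path = os.path.join(tempfile.mkdtemp(), "unet.graph")
g1, hit1 = get_or_build_graph(unet, path)  # first run: compiles and saves
g2, hit2 = get_or_build_graph(unet, path)  # second run: loads from disk
print(hit1, hit2)  # False True
```

The second call returns an identical graph without paying the compile cost, which is the effect the release notes describe for real compiled pipelines.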
consistency-models
AI is getting scary
Three: This one technically came out in early March, but we didn't hear about it till the 12th. [2303.01469] Consistency Models (arxiv.org)
- Introducing Consistency: OpenAI has released the code for its new one-shot image generation technique. Unlike Diffusion, which requires multiple steps of Gaussian noise removal, this method can produce realistic images in a single step, enabling real-time AI image creation from natural language.
- Goodbye Diffusion. Hello Consistency. The code for OpenAI's new approach to AI image generation is now available. This one-shot approach, as opposed to the multi-step Gaussian perturbation method of Diffusion, opens the door to real-time AI image generation.
- Consistency Models
OpenAI releases Consistency Model for one-step generation
tl;dr: a faster alternative to diffusion models for image and audio/video generation.
Abstract of the paper:
> Diffusion models have made significant breakthroughs in image, audio, and video generation, but they depend on an iterative generation process that causes slow sampling speed and caps their potential for real-time applications. To overcome this limitation, we propose consistency models, a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either as a way to distill pre-trained diffusion models, or as standalone generative models. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step generation. For example, we achieve the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained as standalone generative models, consistency models also outperform single-step, non-adversarial generative models on standard benchmarks like CIFAR-10, ImageNet 64x64 and LSUN 256x256.
https://arxiv.org/abs/2303.01469
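The one-step vs. few-step trade-off in the abstract can be checked numerically on a toy problem (this is an illustration, not the paper's code): if the dataset is a single point at 0 under variance-exploding noise x_t = x_0 + t·z, the probability-flow ODE trajectories satisfy x(t) ∝ t, so the exact consistency function is just a rescaling to the minimum time. One-step generation applies that function once to pure noise; few-step sampling re-noises to an intermediate time and denoises again.

```python
import numpy as np

EPS, T = 0.002, 80.0  # time range in the style of the paper's Karras schedule


def consistency_fn(x, t):
    """Exact consistency function for a toy dataset: a point mass at 0 under
    variance-exploding noise. PF-ODE trajectories are x(t) ∝ t, so mapping a
    sample at time t back to time EPS just rescales it. The boundary
    condition f(x, EPS) = x holds by construction."""
    return x * (EPS / t)


def multistep_sample(f, ts, n=10_000, seed=0):
    """One-step generation (ts=[]) or few-step refinement (ts=[t1, t2, ...]):
    start from pure noise at t=T, apply f, then optionally re-noise to each
    intermediate time and apply f again."""
    rng = np.random.default_rng(seed)
    x = f(rng.normal(0.0, T, n), T)  # one-step sample from pure noise
    for t in ts:
        z = rng.normal(0.0, 1.0, n)
        x_hat = x + np.sqrt(t**2 - EPS**2) * z  # perturb back to time t
        x = f(x_hat, t)  # denoise in a single call
    return x


one_step = multistep_sample(consistency_fn, [])
two_step = multistep_sample(consistency_fn, [5.0])
print(one_step.std(), two_step.std())  # both ≈ EPS: samples collapse onto the data point
```

With the exact consistency function both variants already hit the target; with a learned, imperfect one, the extra steps are what trade compute for sample quality, as the abstract describes.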
- [P] Consistency: Diffusion in a Single Forward Pass 🚀
What are some alternatives?
comfyui-browser - An image/video/workflow browser and manager for ComfyUI.
stable_diffusion_playground - Playing around with stable diffusion. Generated images are reproducible because I save the metadata and latent information. You can generate and then later interpolate between the images of your choice.
oneflow - OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
consistency_models - Official repo for consistency models.
diffusion-fast - Faster generation with text-to-image diffusion models.
collage-diffusion-ui - An open source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the details.
Ckpt2Diff - This user-friendly wizard is used to convert a Stable Diffusion Model from CKPT format to Diffusers format.
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
diffusion-expert - A software for drawing with stable-diffusion support
JARVIS - JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf