stable-diffusion-videos vs sd_lite

| | stable-diffusion-videos | sd_lite |
|---|---|---|
| Mentions | 17 | 15 |
| Stars | 4,234 | 18 |
| Growth | - | - |
| Activity | 2.0 | 4.5 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-videos
- How to create it?
- Stable Diffusion Text-to-Video WebUI
Main Code: https://github.com/nateraw/stable-diffusion-videos/
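Based on the repo's README, generating a video comes down to loading the walk pipeline and interpolating between prompt/seed pairs. A minimal sketch (parameter values here are illustrative, and the API may differ across versions):

```python
# Minimal sketch of the repo's walk() API, following its README;
# exact parameter names and defaults may vary across versions.
import torch
from stable_diffusion_videos import StableDiffusionWalkPipeline

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

video_path = pipeline.walk(
    prompts=["a cat", "a dog"],   # endpoints to interpolate between
    seeds=[42, 1337],             # one seed per prompt
    num_interpolation_steps=30,   # frames generated between each pair
    output_dir="dreams",          # where frames and the video are written
    guidance_scale=8.5,
    num_inference_steps=50,
)
```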
- Messing with the denoising loop can allow you to reach new places in latent space. 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
So I've continued to experiment with how many papers I can fit into a single pipe and have them play nicely together. The images below were created by combining the panorama code from omerbt/MultiDiffusion with the ideas from albarji/mixture-of-diffusers. It also turns out that nateraw/stable-diffusion-videos can be seen as a special case of a panorama (in latent space rather than prompt space).
- Comparison of new UniPC sampler method added to Automatic1111
https://huggingface.co/spaces/tomg-group-umd/pez-dispenser https://huggingface.co/spaces/AIML-TUDA/safe-stable-diffusion https://huggingface.co/spaces/AIML-TUDA/semantic-diffusion https://github.com/nateraw/stable-diffusion-videos
- Start Frame -> Stable Diffusion + Linear Interpolation -> End Frame
The goal is to make a (short) video out of a given first and last frame. It is similar to what this guy does (https://github.com/nateraw/stable-diffusion-videos; 7-second example video halfway down the page), but instead of starting and ending with a prompt, I want to start and end with two different frames.
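One way to approach this (a sketch of the idea above, not code from the linked repo) is to encode both frames with the SD VAE and spherically interpolate between the two latents:

```python
# Hypothetical sketch: slerp between two latents to get in-between frames.
# In practice z0/z1 would come from vae.encode() of the start/end frames.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherical interpolation between two latent tensors."""
    v0n, v1n = v0 / v0.norm(), v1 / v1.norm()
    dot = (v0n * v1n).sum().clamp(-1 + eps, 1 - eps)  # cosine of the angle
    theta = dot.acos()
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

z0 = torch.randn(1, 4, 64, 64)  # stand-ins for the encoded start/end frames
z1 = torch.randn(1, 4, 64, 64)
frames = [slerp(t, z0, z1) for t in torch.linspace(0, 1, 24).tolist()]
```

Slerp is usually preferred over straight linear interpolation here because Gaussian latents concentrate near a hypersphere shell, and linear blends pass through low-probability regions.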
- Stable Diffusion Videos Easy-to-Use Playground & Competition This Week
Hey y'all! We've been working on a tool that extends Nate Raw's Stable Diffusion Videos repo and makes it as easy as possible for artists to use. We're having a competition this week to stress-test the beta and see who can use it to make the most compelling short video (40 seconds max).
- Create videos with Stable Diffusion. Saw this project and thought someone here might like it.
- Tried to pull off an ultra-smooth video where you don't realize the scenes are changing until after the fact, so I could make an 8-hour background video that won't give seizures
Of course! There might be a better process, but I mainly used:
1. Nate Raw's repo for morphing between prompts: https://github.com/nateraw/stable-diffusion-videos
2. Google's FILM interpolation to smooth out transitions: https://github.com/google-research/frame-interpolation
- [video] Packed underground rave in North Korea with dj ill kim headlining
There are directions in the readme and an example script.
- Short interpolation animation between several frames?
This does exactly that - https://github.com/nateraw/stable-diffusion-videos
sd_lite
- List of Stable Diffusion research software that I don't think has gotten widespread adoption.
- Comparing 5 recent SD distillation methods (SSD/LCM/Turbo) to find the best option for low-VRAM users (images and statistical analysis included). SD-Turbo scores significantly higher on aesthetics; the boost over SD 2.1 is remarkable.
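For reference, SD-Turbo runs in a single step with guidance disabled; a sketch following the public model card's diffusers recipe:

```python
# Sketch per the stabilityai/sd-turbo model card: one inference step,
# classifier-free guidance disabled.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    "a cinematic photo of a fox in the snow",
    num_inference_steps=1,  # distilled model converges in a single step
    guidance_scale=0.0,     # CFG is not used with SD-Turbo
).images[0]
```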
- Latent Jitter: a simple method for generating variations on a prompt to composite into a final image. Stacks well with prompt delay and The Stable Artist to give you 4+ options from a single seed/prompt.
The full details of how to do this are available on GitHub (latent jitter · thekitchenscientist/sd_lite), but I will explain the idea briefly here. I have read this could be done with Perlin or simplex noise, but the code was too complex for my taste. This gets the job done with only minor modifications to the standard pipe.
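The gist, as a hedged sketch (the 0.1 jitter scale is an illustrative value, not the repo's setting): fix one seeded latent, then nudge it with small Gaussian noise per variant and feed each result through the pipeline's `latents` argument.

```python
# Sketch of the latent-jitter idea: one base latent, small random nudges.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

shape = (1, pipe.unet.config.in_channels, 64, 64)  # latent for a 512x512 image
gen = torch.Generator("cuda").manual_seed(42)
base = torch.randn(shape, generator=gen, device="cuda", dtype=torch.float16)

variants = []
for _ in range(4):
    jittered = base + 0.1 * torch.randn_like(base)  # small step in latent space
    variants.append(pipe("a lighthouse at dusk", latents=jittered).images[0])
```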
- "SEGA: Instructing Diffusion using Semantic Dimensions": paper + GitHub repo + web app + Colab notebook for generating variations of a base generated image by specifying secondary text prompt(s). In this example, the secondary text prompt was "smiling". See comment for details.
I did successfully swap the Eiffel Tower for the Burj Khalifa, but that required additional steps: https://github.com/thekitchenscientist/sd_lite/wiki/latent-jitter
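diffusers ships a SEGA implementation as SemanticStableDiffusionPipeline; a sketch with "smiling" as the editing prompt (the edit-guidance values here are illustrative):

```python
# Sketch of SEGA via diffusers; the editing prompt steers the base image
# along a semantic direction without changing the main prompt or seed.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    "a photo of a woman",
    generator=torch.Generator("cuda").manual_seed(0),  # fixes the base image
    editing_prompt=["smiling"],  # secondary prompt / semantic direction
    edit_guidance_scale=5.0,
    edit_warmup_steps=10,        # let the base layout form before editing
    edit_threshold=0.9,
)
image = out.images[0]
```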
- Are there any sure-fire 100% SFW models for Stable Diffusion? Project for kids
I use it in my pipe as a general image beautifier. https://github.com/thekitchenscientist/sd_lite/wiki/safe-latent-diffusion
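Safe Latent Diffusion is also available in stock diffusers as StableDiffusionPipelineSafe, with preset safety strengths; a sketch based on the library docs:

```python
# Sketch of Safe Latent Diffusion via diffusers; SafetyConfig presets
# (WEAK/MEDIUM/STRONG/MAX) expand to the sld_* guidance parameters.
import torch
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

pipe = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

image = pipe(prompt="a portrait of an astronaut", **SafetyConfig.STRONG).images[0]
```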
- Messing with the denoising loop can allow you to reach new places in latent space. 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
The pipe is available at sd_lite/pipeline_stable_diffusion_multi.py (github.com); it combines:
- Comparison of new UniPC sampler method added to Automatic1111
This community has published many XY plots of CFG versus steps (https://github.com/thekitchenscientist/sd_lite/wiki/recommended). The consistent theme is low CFG, fewer steps; high CFG, more steps. UniPC can reach convergence in as few as 8 steps, so I increased that by a third to account for more complex prompts needing longer.
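Swapping UniPC into a diffusers pipeline is a one-line scheduler change; the step count of 11 here reflects the "8 steps plus a third" rule of thumb above, not a hard recommendation:

```python
# Sketch: replace the default scheduler with UniPC and run few steps.
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a watercolor map of a harbor town",
    num_inference_steps=11,  # ~8-step convergence plus headroom
    guidance_scale=5.0,      # low CFG pairs with low step counts
).images[0]
```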
- Create Panorama images of ANY size using less than 6GB VRAM, plus a 6-10x speed-up and added support for batch mode! A modification of MultiDiffusion. Potato computers of the world, rejoice. The SD 2.0 768 model gives the fastest creation of larger sizes, and the VAE image slicing means no VRAM spike.
The pipeline is available from github.com and is called in the usual way. The technique requires the DDIM scheduler.
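For comparison, the stock MultiDiffusion panorama pipeline in diffusers also requires the DDIM scheduler; this sketch uses the library's built-in VAE tiling rather than the post's custom slicing code:

```python
# Sketch of the stock diffusers panorama pipeline (MultiDiffusion).
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")
pipe.vae.enable_tiling()  # decode the wide latent in tiles, not one pass

image = pipe("a wide alpine valley at sunrise", height=512, width=3072).images[0]
```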
- Img2Img as a side-scrolling enhancer - more pictures in the comments
https://github.com/thekitchenscientist/sd_lite is where the code is. Version 1 of the multi-pipe is limited to images 512 pixels high or wide, but any size on the other dimension.
- You too can create Panorama images 512x10240+ (not a typo) using less than 6GB VRAM (vertorama works too). A modification of the MultiDiffusion code to pass the image through the VAE in slices, then reassemble. Potato computers of the world, rejoice.
Not to be deterred, I hacked together some code to blend it all back together after the VAE but before the final colour balance. The pipe code is available on GitHub: thekitchenscientist/sd_lite
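The same slice-decode-and-blend idea exists in stock diffusers as the autoencoder's tiled decode (a comparable built-in, not the author's code); a sketch:

```python
# Sketch: AutoencoderKL's tiled decode splits the latent into overlapping
# tiles, decodes each, and blends the seams, keeping VRAM flat for very
# wide images. These latents are random stand-ins for a real diffusion result.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-2-base", subfolder="vae",
    torch_dtype=torch.float16,
).to("cuda")
vae.enable_tiling()

latents = torch.randn(1, 4, 64, 1280, device="cuda", dtype=torch.float16)  # 512x10240 image
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
```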
What are some alternatives?
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
dain-ncnn-vulkan - DAIN, Depth-Aware Video Frame Interpolation implemented with ncnn library
safe-latent-diffusion - Official Implementation of Safe Latent Diffusion for Text2Image
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
ziplora-pytorch - Implementation of "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs"
stable-karlo - Upscaling Karlo text-to-image generation using Stable Diffusion v2.
erasing - Erasing Concepts from Diffusion Models
stable-diffusion-tensorflow-IntelMetal - Stable Diffusion in TensorFlow / Keras, Designed for Apple Metal on Intel. Forked from @divamgupta's work [Moved to: https://github.com/soten355/MetalDiffusion]
Concurrent-gif2gif - Experimental Automatic1111 Stable Diffusion WebUI extension, concurrent frame rendering