stable-diffusion-videos vs sd-dynamic-thresholding

| | stable-diffusion-videos | sd-dynamic-thresholding |
|---|---|---|
| Mentions | 17 | 26 |
| Stars | 4,234 | 1,019 |
| Growth | - | 4.8% |
| Activity | 2.0 | 7.2 |
| Last commit | about 1 year ago | 21 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
stable-diffusion-videos
- How to create it?
- Stable Diffusion Text-to-Video WebUI
Main Code: https://github.com/nateraw/stable-diffusion-videos/
- Messing with the denoising loop can allow you to reach new places in latent space. Over 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
So I've continued to experiment with how many papers I can fit into a single pipe and have them play nicely together. The images below were created by combining the panorama code from omerbt/MultiDiffusion with the ideas from albarji/mixture-of-diffusers. It also turns out nateraw/stable-diffusion-videos can be seen as a special case of a panorama (in latent space rather than prompt space).
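For the gist of that latent-space walk: stable-diffusion-videos essentially interpolates spherically between the initial noise latents of two seeds and denoises each intermediate point. A minimal sketch in plain PyTorch (illustrative only, not the repo's actual code):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Spherical interpolation: unlike lerp, it keeps intermediate Gaussian
    # noise on (roughly) the same norm shell, which matters because the
    # denoiser expects inputs that look like unit-variance noise.
    u0, u1 = v0.flatten(), v1.flatten()
    dot = torch.clamp((u0 / u0.norm()) @ (u1 / u1.norm()), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:  # nearly parallel vectors: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

# Walk between two seeds: each intermediate latent gets denoised with the
# same prompt embedding, so only the position in latent space changes.
shape = (1, 4, 64, 64)  # SD 1.5 latent shape for 512x512 output
z0 = torch.randn(shape, generator=torch.Generator().manual_seed(42))
z1 = torch.randn(shape, generator=torch.Generator().manual_seed(1337))
frames = [slerp(i / 9, z0, z1) for i in range(10)]  # 10 interpolated latents
```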
- Comparison of new UniPC sampler method added to Automatic1111
https://huggingface.co/spaces/tomg-group-umd/pez-dispenser https://huggingface.co/spaces/AIML-TUDA/safe-stable-diffusion https://huggingface.co/spaces/AIML-TUDA/semantic-diffusion https://github.com/nateraw/stable-diffusion-videos
- Start Frame -> Stable Diffusion + Linear Interpolation -> End Frame
The goal is to make a (short) video out of a given first and last frame. It is similar to what this guy does (https://github.com/nateraw/stable-diffusion-videos, 7-second example video halfway down the page), but instead of starting and ending with a prompt, I want to start and end with two different frames.
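One plausible way to attempt this is to interpolate in the VAE's latent space instead of in prompt space. A sketch under assumptions (diffusers' AutoencoderKL API; the file names, model id, and frame count are placeholders, and a low-strength img2img cleanup pass is assumed but not shown):

```python
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

# Encode both given frames with the SD VAE, lerp the latents, decode.
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

def encode(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # scale to [-1, 1]
    x = x.permute(2, 0, 1).unsqueeze(0)                        # HWC -> NCHW
    with torch.no_grad():
        return vae.encode(x).latent_dist.mean

z_start, z_end = encode("first_frame.png"), encode("last_frame.png")
with torch.no_grad():
    for i in range(24):
        t = i / 23
        z = (1 - t) * z_start + t * z_end          # plain linear interpolation
        frame = vae.decode(z).sample.clamp(-1, 1)  # decode back to pixels
        # a raw latent lerp tends to look like a soft crossfade; running each
        # `frame` through a low-strength img2img pass sharpens it before
        # writing it out as frame i of the video
```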
- Stable Diffusion Videos Easy-to-Use Playground & Competition This Week
Hey y'all! We've been working on a tool that extends Nate Raw's Stable Diffusion Videos repo and makes it as easy as possible for artists to use, and we're having a competition this week to stress test the beta and see who can use it to make the most compelling short video (40 seconds max)
- Create videos with Stable Diffusion. Saw this project and thought someone here might like it.
- Tried to pull off an ultra-smooth video where you don't realize the scenes are changing until after the fact, so I could make an 8hr background video that won't give seizures
Of course! There might be a better process, but I mainly used: 1) Nate Raw's repo for morphing between prompts, https://github.com/nateraw/stable-diffusion-videos, and 2) Google's FILM interpolation to smooth out transitions, https://github.com/google-research/frame-interpolation
- [video] Packed underground rave in North Korea with dj ill kim headlining
There are directions in the readme and an example script.
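That example script is short; per the repo's README it looks roughly like this (the prompts and parameter values here are illustrative):

```python
import torch
from stable_diffusion_videos import StableDiffusionWalkPipeline

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

video_path = pipeline.walk(
    prompts=["a packed underground rave", "a dj under strobe lights"],
    seeds=[42, 1337],
    num_interpolation_steps=60,  # frames rendered between each pair of prompts
    height=512,
    width=512,
    output_dir="dreams",         # where frames and the final video are written
    guidance_scale=8.5,
    num_inference_steps=50,
)
```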
- Short interpolation animation between several frames?
This does exactly that - https://github.com/nateraw/stable-diffusion-videos
sd-dynamic-thresholding
- ZeroDiffusion -- a clean zero terminal SNR training 1.5 base model + experimental inpainting model
For outputs to look right, you will need some form of CFG rescale or dynamic thresholding to correct for overexposure (A1111 extensions are linked -- I am told that ComfyUI has nodes available for these functions). A good starting point for CFG rescale is 0.7, as recommended in the paper. I strongly suspect that CFG rescale is not an ideal solution and leaves a substantial training-inference gap; when using zero terminal SNR models, I find that Dynamic Thresholding can give better outputs that are closer to what I expect from the data, without the brownout often caused by CFG rescale. A potential starting point for Dynamic Thresholding would be: Restart sampler, CFG scale 15, Mimic CFG scale 7.5, Sawtooth on both scale schedulers, 6 for both minimum values, scheduler value 4, do not separate feature channels, ZERO, STD. You will likely have to experiment a lot with Dynamic Thresholding. (edit: small correction to DT settings)
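For reference, the CFG rescale trick from the zero-terminal-SNR paper is only a few lines. A minimal sketch, assuming batch-first model predictions and the paper's phi = 0.7 (not any particular extension's code):

```python
import torch

def cfg_with_rescale(cond: torch.Tensor, uncond: torch.Tensor,
                     scale: float = 7.5, phi: float = 0.7) -> torch.Tensor:
    # High guidance scales inflate the std of the prediction, which shows up
    # as overexposure on zero terminal SNR models; rescaling the guided
    # output back toward the conditional prediction's std corrects for that.
    guided = uncond + scale * (cond - uncond)
    std_cond = cond.std(dim=list(range(1, cond.ndim)), keepdim=True)
    std_guided = guided.std(dim=list(range(1, guided.ndim)), keepdim=True)
    rescaled = guided * (std_cond / std_guided)
    # phi blends between fully rescaled (1.0) and plain CFG (0.0)
    return phi * rescaled + (1 - phi) * guided
```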
- Dynamic Thresholding for comfyui?
Recently switched from A1111 and I love it so far; the flexibility to orchestrate complex workflows automatically instead of performing manual operations is a life changer. Anyhow, one extension I liked on A1111 was this one: https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
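For anyone porting it by hand, the core mechanic is roughly the following. This is a simplified sketch, not the extension's actual code (the real extension adds scale scheduling, per-channel modes, and several other knobs); 4-D latent-shaped predictions and the percentile value are assumptions:

```python
import torch

def mimic_cfg(cond: torch.Tensor, uncond: torch.Tensor,
              cfg_scale: float = 12.0, mimic_scale: float = 7.0,
              percentile: float = 0.995) -> torch.Tensor:
    # Guided prediction at the scale you actually want...
    real = uncond + cfg_scale * (cond - uncond)
    # ...and at the tamer scale whose dynamic range we want to mimic.
    mimic = uncond + mimic_scale * (cond - uncond)
    # Per-sample percentile of absolute values, as in Imagen-style
    # dynamic thresholding (assumes inputs shaped [B, C, H, W]).
    s_real = torch.quantile(real.flatten(1).abs(), percentile, dim=1).view(-1, 1, 1, 1)
    s_mimic = torch.quantile(mimic.flatten(1).abs(), percentile, dim=1).view(-1, 1, 1, 1)
    # Clamp the high-CFG result and squash it into the mimicked range:
    # high-CFG detail and prompt adherence, low-CFG exposure.
    return real.clamp(-s_real, s_real) * (s_mimic / s_real)
```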
- How do I implement Dynamic Thresholding (CFG scale fix) in ComfyUI?
In the Automatic1111 webui, there is a Dynamic Thresholding (CFG scale fix) extension that:
- How to diffuse better faces?
I've found that using ADetailer (https://github.com/Bing-su/adetailer, with their recommended advanced settings and face_yolov8n.pt) and Dynamic Thresholding (CFG set to 12 and Mimic to 7) has vastly improved my face renders. (https://github.com/mcmonkeyprojects/sd-dynamic-thresholding) GL!
- Kohya UI settings as asked (style+character training)
The output LoRA works best with CFG at 4, because at 7 it gets those gasoline colors and the contrast of overbaking, but I guess this is a tradeoff of that many steps in total (5200), since the earlier snapshots were not that good in style and character details. You can use a workaround like the Dynamic Thresholding extension, https://github.com/mcmonkeyprojects/sd-dynamic-thresholding.git - it helps a lot in many cases when you want a high CFG but the model/LoRA overbakes (it mimics a lower CFG while keeping the high-CFG details and prompt alignment).
- Does anyone know how to create this type of hyper realistic pic?
Use sd-dynamic-thresholding extension (set CFG scale to 12 or more and mimic CFG scale to 7): https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
- ControlNet Reference-Only problems
- What's your favorite small tweaks to make? I'll go first
Tweak this up or down for small changes. Too far and you’ll get a different image. Extensions like Dynamic Thresholding can let you go much higher without the overexposed look.
- Blurred/Low quality/Low details images
Turn CFG scale down, or maybe use this extension; I've never used Dynamic Thresholding before, but I think it's what you want
- Dynamic threshold & Offset noise - The answer to oversaturated images?
What are some alternatives?
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
stable-diffusion-webui-anti-burn - Extension for AUTOMATIC1111/stable-diffusion-webui for smoothing generated images by skipping a few very last steps and averaging together some images before them.
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
Stable-Diffusion - Stable Diffusion, SDXL, LoRA Training, DreamBooth Training, Automatic1111 Web UI, DeepFake, Deep Fakes, TTS, Animation, Text To Video, Tutorials, Guides, Lectures, Courses, ComfyUI, Google Colab, RunPod, NoteBooks, ControlNet, TTS, Voice Cloning, AI, AI News, ML, ML News, News, Tech, Tech News, Kohya LoRA, Kandinsky 2, DeepFloyd IF, Midjourney
dain-ncnn-vulkan - DAIN, Depth-Aware Video Frame Interpolation implemented with ncnn library
adetailer - Auto detecting, masking and inpainting with detection model.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
stable-karlo - Upscaling Karlo text-to-image generation using Stable Diffusion v2.
sd_webui_SAG
stable-diffusion-tensorflow-IntelMetal - Stable Diffusion in TensorFlow / Keras, Designed for Apple Metal on Intel. Forked from @divamgupta's work [Moved to: https://github.com/soten355/MetalDiffusion]