stable-diffusion-loopback-color-correction-script vs sd-parseq

| | stable-diffusion-loopback-color-correction-script | sd-parseq |
|---|---|---|
| Mentions | 6 | 8 |
| Stars | 28 | 338 |
| Growth | - | - |
| Activity | 10.0 | 8.9 |
| Latest Commit | over 1 year ago | 7 months ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-loopback-color-correction-script
-
Loopback makes pictures lose contrast a lot
Hi, I have a problem with loopback that I've struggled with for a few days and can't fix. When I use loopback, even with just 5 steps, the pictures progressively lose contrast until they are pure gray. I have tried turning color correction on and off in settings, I have tried the https://github.com/rewbs/stable-diffusion-loopback-color-correction-script script for color correction, and I have tried using inpainting instead, but nothing seems to help. Do you have any idea what may cause this and how to fix it? The weird thing is that the preview looks fine while it generates, but the moment it saves, the image loses contrast and changes. I am using the v4.5 model and DPM++ SDE sampling.
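Colour-correction scripts like the one linked above generally work by matching each new loopback frame's colour histogram back to the first frame, which stops the slow drift toward gray or magenta. A minimal sketch of that general technique in NumPy (function names are mine, not the script's actual code):

```python
import numpy as np

def match_channel(source, reference):
    """Map source values so their CDF matches the reference channel's CDF."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

def correct_colors(frame, first_frame):
    """Per-channel histogram match of a loopback frame to the first frame."""
    out = np.empty_like(frame, dtype=np.float64)
    for c in range(frame.shape[-1]):
        out[..., c] = match_channel(frame[..., c], first_frame[..., c])
    return out.astype(frame.dtype)
```

Applied after every img2img pass, this keeps the contrast and colour balance of the run anchored to the initial image instead of compounding drift frame after frame.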
-
Black Muddy River by the Grateful Dead (a stable diffusion music video)
This video is the first thing I made that I wanted to share with more than just my family. I made most of it from the lyrics at 50 frames per verse. I interpolated from one verse to the next: “current_verse:1.0 AND next_verse:0.0” in 50 steps to “current_verse:0.0 AND next_verse:1.0” using a custom script I wrote for the automatic1111 repo. I started from the loopback color correction script (no longer needed! hurray for VAE!) and added code to allow an eight-parameter “move” that repositions each corner of the next image. It also interpolates between an initial move and an end move, so that the transitions are smooth.
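An eight-parameter "move" that repositions each corner of the image is exactly a projective (perspective) transform, which is fully determined by where the four corners land. A sketch of how such a move could be solved for and interpolated between a start and end position (all names are illustrative, not from the author's script):

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for the 8-parameter projective transform mapping src corners to dst.
    Four point pairs give 8 linear equations; h33 is fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def lerp_corners(start, end, t):
    """Linearly interpolate corner positions so the move eases from start to end."""
    return (1 - t) * np.asarray(start, float) + t * np.asarray(end, float)

# Example: over a transition, pull the top-left corner of a 512x512 frame
# inward by 40 px; at t=0.5 the corner is halfway there.
base = [(0, 0), (512, 0), (512, 512), (0, 512)]
end = [(40, 40), (512, 0), (512, 512), (0, 512)]
H_mid = homography_from_corners(base, lerp_corners(base, end, 0.5))
```

Interpolating the corner positions (rather than the matrix entries) is what makes the per-frame transitions smooth.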
- Stable Diffusion links from around October 5, 2022 that I collected for further processing
-
Good news: VAE prevents the loopback magenta skew!
Historically, loopbacks have suffered from a major colour skew problem, resulting in most UIs offering some kind of colour-correction post-processing step as an option. You can see some examples of the problem and workaround here. This didn't go away with the 1.5 model.
-
Experimenting with video, feedback would be great
As an aside, looks like this video is hitting the magenta/cyan skew issue that emerges when you do loopbacks with certain params. You could look into colour correction techniques to avoid this. See https://github.com/rewbs/stable-diffusion-loopback-color-correction-script for more info.
- User script to provide advanced colour correction options for img2img loopback for AUTOMATIC1111/stable-diffusion-webui
sd-parseq
-
Subject rotation with deforum
This is how it was done https://github.com/rewbs/sd-parseq/discussions/106
-
Google researchers achieve performance breakthrough, rendering Stable Diffusion images in sub-12 seconds on a mobile phone. Generative AI models running on your mobile phone are nearing reality.
rewbs/sd-parseq (github.com)
-
Music visualisation + stable diffusion + lots of animation parameter tweaking (method in comments)
sd-parseq for parameter control / keyframing.
- List of SD Tutorials & Resources
-
Rêverie, variation 91: revisiting a nightmare (Stable Diffusion using Deforum + Parseq, no editing)
I need to modify Parseq to add a way to specify absolute values and then render deltas/slopes for use in loopback-based animations. Otherwise it's too hard to say, e.g., "I want a 180 degree rotation over 16 beats": you have to manually figure out what per-frame increments will add up to 180.
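To illustrate the absolute-to-deltas idea: given an absolute rotation curve, the per-frame increments a loopback animation needs are just the successive differences, which telescope back to the absolute endpoint. A hypothetical sketch (the frame count assumes 10 frames per beat, e.g. 120 BPM rendered at 20 fps; none of this is Parseq's actual API):

```python
def deltas_from_absolute(values):
    """Per-frame increments whose cumulative sum reproduces the absolute curve."""
    return [b - a for a, b in zip(values, values[1:])]

# "180 degrees over 16 beats" at 10 frames per beat = 160 frames.
frames = 160
absolute = [180.0 * i / frames for i in range(frames + 1)]  # linear 0 -> 180
deltas = deltas_from_absolute(absolute)  # each frame rotates 1.125 degrees
```

With deltas derived automatically, the user states the intent ("180 degrees over 16 beats") and any easing curve on the absolute values still sums to the right total.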
-
Good news: VAE prevents the loopback magenta skew!
Cool! Correct me if I'm wrong, but isn't that kind of animation essentially loopback with incremental transforms on each frame before feeding back into SD? If so, that's why I'm so excited about the colour improvements too (I'm the author of sd-parseq, a tool for SD animations: https://github.com/rewbs/sd-parseq)! :)
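The structure being described, loopback with incremental transforms, reduces to a very small loop: transform the previous output, feed it back through img2img, repeat. A minimal sketch (both `transform` and `img2img` are stand-ins for the real pipeline stages):

```python
def loopback(first_image, steps, transform, img2img):
    """Generic loopback: each frame is img2img run on a transformed copy
    of the previous frame, so errors and colour drift compound per step."""
    frames = [first_image]
    for _ in range(steps):
        frames.append(img2img(transform(frames[-1])))
    return frames

# Toy stand-ins: a numeric "image", a transform that scales it, identity img2img.
frames = loopback(1.0, 5, transform=lambda x: x * 1.01, img2img=lambda x: x)
```

Because every frame is built from the previous one, any per-step bias (like the magenta skew) accumulates, which is why a better VAE at each decode step helps so much.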
-
Quick demo of sd-parseq for a1111: cycling through some famous faces with oscillating prompt weights, denoising strength and zoom using sd-parseq (details in comment)
As described in this post, I've been working on a script for the automatic1111 UI with its own separate companion UI for "sequencing" parameter changes over multiple generations, resulting in interesting videos that you can control precisely. This video is a quick & dirty demo to give you a better idea of what it does. The output described in that video looks like this. If you're feeling adventurous you can check out the param flows directly in the parseq UI here (NB: only tested in Chrome so far, uses massive URLs which some browsers don't like). The code is here: https://github.com/rewbs/sd-parseq
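Oscillating a parameter like denoising strength over a frame sequence amounts to sampling a keyframed wave per frame. A rough sketch in the spirit of Parseq's oscillator idea (the function signature and names here are illustrative, not Parseq's actual API):

```python
import math

def osc(frame, period, amplitude=1.0, center=0.0, phase=0.0):
    """Sine oscillator sampled per frame; period is in frames."""
    return center + amplitude * math.sin(2 * math.pi * (frame / period + phase))

# Assumed example: denoising strength oscillating between 0.35 and 0.65
# with a 60-frame period, sampled for a 120-frame clip.
schedule = {f: round(osc(f, period=60, amplitude=0.15, center=0.5), 3)
            for f in range(120)}
```

Each generation then reads its parameter values from the schedule for its frame number, which is what lets prompt weights, denoising strength, and zoom all move independently but in sync with the frame count.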
-
"Parameter Sequencer" for Automatic1111 WebUI
I've been playing with the idea of a custom script + companion UI to give fine grained control over various parameters when generating videos. It's still very rough but I think it's just about ready to share: https://github.com/rewbs/sd-parseq
What are some alternatives?
dreambooth-docker
deforum-for-automatic1111-webui - Deforum extension script for AUTOMATIC1111's Stable Diffusion webui [Moved to: https://github.com/deforum-art/sd-webui-deforum]
AI-Horde - A crowdsourced distributed cluster for AI art and text generation
unprompted - Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
stable-diffusion-webui - Stable Diffusion web UI
StylePile - A prompt generation helper script for AUTOMATIC1111/stable-diffusion-webui and compatible forks
Stable-diffusion-webui-video
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
sd-akashic - A compendium of information regarding Stable Diffusion (SD)
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.