seed_travel vs sd-webui-controlnet

| | seed_travel | sd-webui-controlnet |
|---|---|---|
| Mentions | 16 | 247 |
| Stars | 302 | 16,105 |
| Growth | - | - |
| Activity | 6.3 | 9.6 |
| Latest commit | 11 months ago | 1 day ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seed_travel
-
a short seed travel
Seed travel is a technique and a script for A1111: https://github.com/yownas/seed_travel
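seed_travel itself is an A1111 script, but the core idea is easy to sketch with the diffusers library: keep the prompt fixed, build the initial noise for two seeds, and spherically interpolate (slerp) between them for each frame. The model ID, prompt, frame count, and slerp helper below are illustrative assumptions, not code taken from the extension.

```python
import torch
from diffusers import StableDiffusionPipeline

def slerp(t, a, b, eps=1e-7):
    """Spherical interpolation between two noise tensors (common SD-community form)."""
    a_n, b_n = a / a.norm(), b / b.norm()
    dot = (a_n * b_n).sum().clamp(-1 + eps, 1 - eps)
    theta = torch.arccos(dot)
    return (torch.sin((1 - t) * theta) * a + torch.sin(t * theta) * b) / torch.sin(theta)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a misty forest at dawn"                   # fixed prompt; only the seed moves
shape = (1, pipe.unet.config.in_channels, 64, 64)   # latent shape for a 512x512 output

# Initial noise for the two seeds we travel between
noise_a = torch.randn(shape, generator=torch.Generator().manual_seed(1))
noise_b = torch.randn(shape, generator=torch.Generator().manual_seed(2))

frames = 30
for i in range(frames):
    t = i / (frames - 1)
    latents = slerp(t, noise_a, noise_b).to("cuda", dtype=torch.float16)
    image = pipe(prompt, latents=latents, num_inference_steps=30).images[0]
    image.save(f"frame_{i:04d}.png")
```

Because adjacent frames start from nearly identical noise, the outputs change gradually, which is what gives seed-travel animations their smooth, flicker-free look.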
-
Transmigrations concert visuals remixes
For the video it turned out a bit too "hairy" compared to many of the still images (I believe because of the long landscape aspect ratio), but I ran out of time to fiddle. I used the Seed Travel extension for the animation and ChaiNNer with the 4x-Valar upscaler.
-
Most useful extensions for beginners, except ControlNet
Seed Travel and Clip Interrogator extensions are both listed in the extensions tab of a1111, so that's the easiest route. But sure: https://github.com/yownas/seed_travel and https://github.com/pharmapsychotic/clip-interrogator-ext
-
What is the theoretical max number of images that stable diffusion can generate?
smooth latent space https://github.com/yownas/seed_travel
- Trying out some Stable Diffusion seed travel stuff
-
How to achieve this barely visible transition?
To stick with one prompt and slowly move to another seed, use this script instead https://github.com/yownas/seed_travel
-
Use the seed_travel extension for automatic1111 to make some excellent "flickerless" animations
Get the seed_travel extension by yownas. Follow the instructions to install it via the webui.
-
Chika - Seed Travel extension
I've added a new feature to https://github.com/yownas/seed_travel where you can select different "Interpolation rates". This one uses "Slow start"
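An "interpolation rate" like "Slow start" amounts to remapping the linear 0..1 progress value before it is used as the interpolation factor between seeds. A minimal sketch of that idea is below; the names and formulas are illustrative and may not match the curves the extension actually ships.

```python
# Easing curves that remap linear progress t in [0, 1] before it drives
# the seed interpolation. Illustrative only; the extension's curves may differ.
def linear(t):     return t
def slow_start(t): return t * t                 # ease-in: small steps at first
def slow_end(t):   return 1 - (1 - t) ** 2      # ease-out: small steps at the end
def smooth(t):     return t * t * (3 - 2 * t)   # smoothstep: slow at both ends

frames = 10
for i in range(frames):
    t = i / (frames - 1)
    print(f"t={t:.2f}  slow_start={slow_start(t):.3f}  smooth={smooth(t):.3f}")
```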
-
Best Option for Large Digital Wall Display?
Compressing the videos has become quite a project that involves the seed_travel script, a little imagemagick, upscaling with realSR, an absolute ton of interpolation with RIFE, and the swiss army knife of video tools, ffmpeg.
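The last stage of a pipeline like that is usually just an ffmpeg encode of the numbered frames. A minimal sketch of that step is below; the frame directory, numbering pattern, frame rate, and quality settings are assumptions, and the earlier upscaling/interpolation steps are left to their own tools.

```python
import subprocess

# Encode the (upscaled, frame-interpolated) PNG sequence into an H.264 video.
subprocess.run([
    "ffmpeg",
    "-framerate", "60",          # playback rate after RIFE interpolation
    "-i", "frames/%05d.png",     # numbered frames produced by the earlier steps
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",       # widest player compatibility
    "-crf", "18",                # visually near-lossless compression
    "out.mp4",
], check=True)
```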
- Interpolation with openai/guided-diffusion
sd-webui-controlnet
-
OpenPose ControlNet: A Beginner's Guide
Installing the ControlNet extension is the essential first step, whether you run the web UI on Google Colab, a Windows PC, or a Mac, and keeping the extension up to date matters for getting reliable results from the OpenPose model. To install the v1.1 ControlNet extension, go to the "Extensions" tab and install it from this URL: https://github.com/Mikubill/sd-webui-controlnet. If you already have the v1 extension installed, delete its folder from stable-diffusion-webui/extensions/ and install v1.1 fresh.
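Under the hood, "Install from URL" just clones the extension repository into the web UI's extensions/ folder, so the same install can be scripted. The sketch below assumes a local stable-diffusion-webui checkout and that any old ControlNet folder uses the default repo name; adjust paths for your setup.

```python
import shutil
import subprocess
from pathlib import Path

# Equivalent of the "Install from URL" step in the Extensions tab:
# clone the extension into the web UI's extensions/ folder.
webui = Path("stable-diffusion-webui")       # path to your web UI install (assumption)
extensions = webui / "extensions"

# Remove an existing (older) ControlNet extension folder if present;
# the folder name may differ in your install.
old = extensions / "sd-webui-controlnet"
if old.exists():
    shutil.rmtree(old)

subprocess.run(
    ["git", "clone", "https://github.com/Mikubill/sd-webui-controlnet.git"],
    cwd=extensions,
    check=True,
)
```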
-
StyleAligned node for ComfyUI
1.1.420 Image-wise ControlNet and StyleAlign
-
PATCHFUSION is really impressive. High resolution depth maps in 16bit. I've been waiting for this. https://github.com/zhyever/PatchFusion
I opened a request thread on ControlNet GitHub you can give a support : https://github.com/Mikubill/sd-webui-controlnet/issues/2319
- Going to lose my mind at this point with this problem
- Samples of style-aligned
-
Is it possible to outpaint with SD or SDXL as easy as with photoshop? (no prompts)
It has been possible for 7 months now
- Reference Only Broken (Can someone with a working Reference Only CN upload their extension folder)
-
Web app prototype to create controlnet segmentation maps for Stable Diffusion
I sometimes use a very similar technique in Cinema4d (here is a link to a c4d file with preset materials referencing proper colors for Semantic Segmentation if any other c4d user wants to try it), but yours is a much more accessible solution as it's free and it's accessible online.
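The same kind of segmentation map can also be drawn programmatically: the ControlNet "seg" model expects flat regions of color, one per semantic class, in the ADE20K palette it was trained on. The sketch below uses PIL; the RGB values are placeholders from memory and should be checked against the actual ADE20K palette (or against what the web app above exports) before use.

```python
from PIL import Image, ImageDraw

# Placeholder ADE20K-style class colors -- verify against the real palette.
SKY      = (6, 230, 230)
BUILDING = (180, 120, 120)
TREE     = (4, 200, 3)
ROAD     = (140, 140, 140)

img = Image.new("RGB", (512, 512), SKY)
draw = ImageDraw.Draw(img)
draw.rectangle([0, 300, 512, 512], fill=ROAD)        # ground plane
draw.rectangle([60, 120, 260, 300], fill=BUILDING)   # a building block
draw.ellipse([320, 160, 470, 310], fill=TREE)        # tree canopy

# Feed this image to the ControlNet seg model directly (preprocessor set to none),
# since it is already a color-coded segmentation map.
img.save("seg_map.png")
```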
-
Dalle-3 Examples
There are models available that give you more control - in some senses, at least.
For example, you can use Stable Diffusion with 'ControlNet' [1] where for example, you can input an 'openpose' to choose the pose of people in the scene.
There's also a 'Regional Prompter' [2] which lets you use different prompts for different areas of the image, giving you some control over the composition.
You can also use 'inpainting' to regenerate select parts of your image if, for example, you don't like the shape of the clouds.
Of course this stuff isn't perfect - for example, you'll get hands with the wrong number of fingers sometimes, no matter what you specify :)
[1] https://github.com/Mikubill/sd-webui-controlnet
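The comment above is about the A1111 extension, but the same openpose-conditioned generation can be tried programmatically with the diffusers library. A minimal sketch, assuming a local reference photo and illustrative prompt; the model IDs are the commonly used public ones.

```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Extract an OpenPose skeleton from a reference photo, then condition
# generation on it so the output keeps the same pose.
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose = openpose(load_image("reference_pose.png"))    # local file path is an assumption

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "an astronaut dancing on the moon, photorealistic",
    image=pose,                  # the pose map steers the composition
    num_inference_steps=30,
).images[0]
image.save("posed_astronaut.png")
```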
- ControlNet SDXL for Automatic1111-WebUI official release: sd-webui-controlnet 1.1.400
What are some alternatives?
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-diffusion-backend - Backend for my Stable diffusion project(s)
openpose-editor - Openpose Editor for AUTOMATIC1111's stable-diffusion-webui
pi_video_looper - Application to turn your Raspberry Pi into a dedicated looping video playback device, good for art installations, information displays, or just playing cat videos all day.
T2I-Adapter - T2I-Adapter
realsr-ncnn-vulkan - RealSR super resolution implemented with ncnn library
ControlNet - Let us control diffusion models!
batchlinks-webui - Download several Huggingface, MEGA, and CivitAI links at once. SD webui extension. For colab.
stable-diffusion-webui-colab - stable diffusion webui colab
stable-diffusion-webui-wildcards - Wildcards
stable-diffusion-webui - Stable Diffusion web UI