seed_travel vs stable-diffusion-webui-wildcards

| | seed_travel | stable-diffusion-webui-wildcards |
|---|---|---|
| Mentions | 16 | 20 |
| Stars | 302 | 408 |
| Growth | - | - |
| Activity | 6.3 | 2.2 |
| Latest commit | 11 months ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seed_travel
-
a short seed travel
Seed travel is a technique and a script for A1111: https://github.com/yownas/seed_travel
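The core idea behind the technique — keep the prompt fixed and interpolate between the initial noise of two seeds — can be sketched roughly as follows. This is an illustrative NumPy sketch, not the extension's actual code: the tensor shape and seed values are made up, and in a real pipeline the interpolated noise would be fed to the diffusion sampler. Spherical interpolation (slerp) is used because Gaussian noise vectors concentrate near a hypersphere, so linear blends would drift toward lower-norm noise.

```python
import numpy as np

def slerp(t, a, b):
    """Spherical linear interpolation between two flattened noise tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are nearly parallel; fall back to linear interpolation.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Initial latent noise for two seeds (shape is illustrative, not SD's real layout).
noise_a = np.random.default_rng(42).standard_normal(4 * 64 * 64)
noise_b = np.random.default_rng(1337).standard_normal(4 * 64 * 64)

# Ten intermediate noise tensors; denoising each with the same prompt
# yields the smooth "seed travel" animation frames.
frames = [slerp(t, noise_a, noise_b) for t in np.linspace(0.0, 1.0, 10)]
```

At t = 0 the result is exactly the first seed's noise and at t = 1 the second's, so the animation starts and ends on the two original images.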
-
Transmigrations concert visuals remixes
For the video it turned out a bit too "hairy" compared to many of the still images (I believe because of the long landscape aspect ratio), but I ran out of time to fiddle. I used the Seed Travel extension for the animation and ChaiNNer with the 4x-Valar upscaler.
-
Most useful extensions for beginners, except ControlNet
The Seed Travel and Clip Interrogator extensions are both listed in the Extensions tab of A1111, so that's the easiest route. But sure: https://github.com/yownas/seed_travel and https://github.com/pharmapsychotic/clip-interrogator-ext
-
What is the theoretical max number of images that stable diffusion can generate?
smooth latent space https://github.com/yownas/seed_travel
- Trying out some Stable Diffusion seed travel stuff
-
How to achieve this barely visible transition?
To stick with one prompt and slowly move to another seed, use this script instead https://github.com/yownas/seed_travel
-
Use the seed_travel extension for automatic1111 to make some excellent "flickerless" animations
Get the seed_travel extension by yownas. Follow the instructions to install it via the webui.
-
Chika - Seed Travel extension
I've added a new feature to https://github.com/yownas/seed_travel where you can select different "Interpolation rates". This one uses "Slow start"
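An "interpolation rate" of this kind can be pictured as an easing function that remaps the interpolation parameter t before it is used. A minimal sketch — the function names here are mine, not the extension's, and "slow start" is assumed to behave like a quadratic ease-in:

```python
def linear(t):
    return t

def slow_start(t):
    # Quadratic ease-in: small steps between early frames, larger steps later.
    return t * t

def slow_end(t):
    # Mirror image: large early steps, easing into the final seed.
    return 1.0 - (1.0 - t) ** 2

# Remapped schedule for a 10-frame travel.
steps = [i / 9 for i in range(10)]
print([round(slow_start(t), 2) for t in steps])
```

Each remapped value would then be passed to the seed interpolation in place of the raw t, changing the pacing of the animation without changing its endpoints.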
-
Best Option for Large Digital Wall Display?
Compressing the videos has become quite a project that involves the seed_travel script, a little imagemagick, upscaling with realSR, an absolute ton of interpolation with RIFE, and the swiss army knife of video tools, ffmpeg.
- Interpolation with openai/guided-diffusion
stable-diffusion-webui-wildcards
-
ComfyUI agents for automated prompting, Art Direction, Critic, etc.
For the prompter you probably want to implement https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards somehow, and maybe have an LLM auto-fill a bunch of the wildcard files with some randomness — probably some kind of procedural formula like "_subject_ _action_ _object_ in _place_ in _style_" or something similar. Have it do Mad Libs, basically, then write the results to text files and let those files populate the wildcards. You could probably also write a script to scrape a bunch of prompts from images on Civitai.
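The Mad Libs idea above can be sketched without an LLM at all. In this illustrative sketch the word pools are hypothetical stand-ins for the LLM-generated lists, and the template mirrors the procedural formula from the comment:

```python
import random

# Hypothetical word pools; in the scheme described above an LLM would fill these.
POOLS = {
    "subject": ["a knight", "an astronaut", "a fox"],
    "action": ["reading", "dancing", "sleeping"],
    "object": ["a book", "a lantern", "a sword"],
    "place": ["a library", "a moon base", "a forest"],
    "style": ["watercolor", "art deco", "pixel art"],
}

TEMPLATE = "{subject} {action} {object} in {place} in {style}"

def mad_libs(template, pools, rng=random):
    """Fill each template slot with a random entry from its pool."""
    return template.format(**{k: rng.choice(v) for k, v in pools.items()})

print(mad_libs(TEMPLATE, POOLS))
```

Writing each pool out to `subject.txt`, `action.txt`, and so on would then give the wildcards extension the same vocabulary to draw from at generation time.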
-
Need help with ideas
First step: install the dynamic prompt extension (you may like some of its extra features as they fall directly in the type of work you are doing) or the barebones Wildcard extension that was initially released by A1111 himself.
-
Dynamic Prompts: Wildcard File categories, subcategories, folders?
I disabled Dynamic Prompts and installed the stable-diffusion-webui-wildcards extension, which I used before Dynamic Prompts added wildcards to its feature set. I'm much happier with it; I wasn't using any of the other Dynamic Prompts capabilities anyway.
-
New Python Script for randomizing prompts, with hundreds of variables
I mean, we've had the wildcards extension (https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards) together with Dynamic Prompts (https://github.com/adieyal/sd-dynamic-prompts) "for ages" already, so I wanted to know: what's the difference with your script? What does it do differently or better than those extensions?
-
Most useful extensions for beginners, except ControlNet
Not quite for beginners, but depending on how deep you want to dive, Regional Prompter is worth a shot. I still have trouble understanding how setting the regions works because I am a simple man, but I did manage to set up some simple stuff and make it work. What it does is separate your output into rectangles based on proportions: horizontal 1,1 splits your output horizontally into halves and you can specify different prompts for each half; 1,1,1 splits it into 3 equal parts; 1,2,1 splits it 25%, 50%, 25%, and so on. You can split horizontally and vertically at the same time, and it can get as complex as you want or your poor pure soul can endure. There's a tutorial now; I need to look into that more and make sense of it. You can find it here.

Zoom canvas is nice, a bit finicky but definitely useful.

Tiled VAE seems to work somewhat. I seem to be able to do hires fix 2x on my 6 GB 1660 Ti card, but when using ControlNet it caps at around 1.2x; I need to experiment more. It should split your renders into tiles and work on them individually, reducing VRAM use, something like that.

For wildcarding I use this simple extension here. I just use ChatGPT to generate lists of words (locations, hairstyles, outfits, nationalities, etc.), paste them into txt files, and prompt them as __location__ if the text file is named location.txt; SD will randomly use the entries as tokens. I know there's dynamic prompting, but I haven't had time to look into that yet.

Multi Diffusion and Composable LoRA are some others you can look up; they seem to work nicely with Regional Prompter. Composable LoRA should let you use multiple LoRAs on different regions of your output (like an anime Ghibli character and a realistic Gal Gadot character on an oil-painted background).

Wow, it took me half an hour to type this on my phone, hope it helps 😅
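The location.txt mechanism described above can be sketched as a simple substitution pass over the prompt. This is a hypothetical re-implementation for illustration, not the extension's actual code; the `__name__` token syntax follows the wildcards extension's convention of matching a token to a same-named text file:

```python
import random
import re
import tempfile
from pathlib import Path

def expand_wildcards(prompt, wildcard_dir, rng=random):
    """Replace each __name__ token with a random line from <name>.txt."""
    def pick(match):
        path = Path(wildcard_dir) / f"{match.group(1)}.txt"
        options = [line.strip() for line in path.read_text().splitlines() if line.strip()]
        return rng.choice(options)
    return re.sub(r"__([\w-]+)__", pick, prompt)

# Demo: a location.txt wildcard file, as in the comment above.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "location.txt").write_text("a misty forest\na neon city\na desert ruin\n")
    print(expand_wildcards("portrait of a wizard in __location__", d))
```

Each batch image re-runs the substitution, which is why a single wildcarded prompt yields varied outputs across a batch.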
- Using random unrelated words as prompt
-
Civitai question, what the hell are wildcards?
Probably related to this. https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards
-
Prompts to generate large variety of clothing?
You are looking for Dynamic Prompts in use with Wildcards.
-
How do I vary images in a batch run
Check out the stable-diffusion-webui-wildcards extension for A1111 webui.
-
How to combine prompt from file or textbox script and the normal prompt?
I guess Wildcards or Dynamic Prompts might be what you want.
What are some alternatives?
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
stable-diffusion-backend - Backend for my Stable diffusion project(s)
canvas-zoom - zoom and pan functionality
pi_video_looper - Application to turn your Raspberry Pi into a dedicated looping video playback device, good for art installations, information displays, or just playing cat videos all day.
stable-diffusion-webui-Prompt_Generator - An extension to AUTOMATIC1111 WebUI for stable diffusion which adds a prompt generator
realsr-ncnn-vulkan - RealSR super resolution implemented with ncnn library
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
batchlinks-webui - Download several Huggingface, MEGA, and CivitAI links at once. SD webui extension. For colab.
StableDiffusion - Sd repo
prompt-interpolation-script-for-sd-webui