-
batchlinks-webui
Download several Huggingface, MEGA, and CivitAI links at once. SD webui extension. For colab.
-
seed_travel
Small script for AUTOMATIC1111/stable-diffusion-webui to create images between two seeds
This easily downloads models, LoRAs, etc. from different sources: https://github.com/etherealxx/batchlinks-webui
Canvas Zoom (for inpainting): https://github.com/richrobber2/canvas-zoom.git
Not quite for beginners, but depending on how deep you want to dive, Regional Prompter is worth a shot. I still have a problem understanding how setting the regions works because I am a simple man, but I did manage to set up some simple stuff and make it work. What it does is separate your output into rectangles based on proportions: a horizontal split of 1,1 will divide your output into halves and you can specify a different prompt for each half; 1,1,1 will split it into three equal parts; 1,2,1 will split it 25% / 50% / 25%, and so on. You can split horizontally and vertically at the same time, and it can get as complex as you want or your poor pure soul can endure. There's a tutorial now; I need to look into it more and make sense of it. You can find it here.

Canvas Zoom is nice, a bit finicky but definitely useful.

Tiled VAE seems to work somewhat: I'm able to do a 2x hires fix on my 6 GB 1660 Ti, but when using ControlNet it caps out around 1.2x or so; I need to experiment more. It should split your render into tiles and work on them individually, reducing VRAM use.

For wildcarding I use this simple extension here: I just ask ChatGPT to generate lists of words (locations, hairstyles, outfits, nationalities, etc.), paste them into txt files, and prompt them as location if the text file is named location.txt, and SD will randomly use one of them as a token. I know there's Dynamic Prompts, but I haven't had time to look into that yet.

MultiDiffusion and Composable LoRA are some others you can look into; they seem to work nicely with Regional Prompter. Composable LoRA should let you use multiple LoRAs on different regions of your output (like an anime Ghibli character and a realistic Gal Gadot character on an oil-painted background). Wow, it took me half an hour to type this on my phone, hope it helps 😅
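The ratio syntax described above is just relative weights: each number is a weight, and a region's share of the canvas is its weight divided by the sum. A toy sketch of that arithmetic (an illustration only, not Regional Prompter's actual code; the function name is made up):

```python
def split_fractions(ratios: str) -> list[float]:
    """Turn a Regional Prompter-style ratio string like "1,2,1"
    into the fraction of the canvas each region occupies."""
    weights = [float(w) for w in ratios.split(",")]
    total = sum(weights)
    return [w / total for w in weights]

print(split_fractions("1,1"))    # two equal halves: [0.5, 0.5]
print(split_fractions("1,2,1"))  # 25% / 50% / 25%: [0.25, 0.5, 0.25]
```

So "1,2,1" gives the middle region half the canvas, with a quarter on each side, and you assign each region its own prompt.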
Seed Travel and Clip Interrogator extensions are both listed in the Extensions tab of A1111, so that's the easiest route. But sure: https://github.com/yownas/seed_travel and https://github.com/pharmapsychotic/clip-interrogator-ext