| | sd-webui-regional-prompter | DAIN |
|---|---|---|
| Mentions | 60 | 34 |
| Stars | 1,394 | 8,126 |
| Growth | - | - |
| Activity | 8.5 | 0.0 |
| Last commit | about 1 month ago | over 1 year ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-regional-prompter
- Regional Prompting doesn't seem to be working a lot of the time
So I'm using the Regional Prompter extension https://github.com/hako-mikan/sd-webui-regional-prompter
- Dalle-3 Examples
- Stable Diffusion 1.5 Newbie Question about creating an image with 2 characters
"In summary, Stable Diffusion doesn’t really care about commas. But you can use them to organize your prompts for your own orderliness." (Link to quote below.) So... Is there a way to make SD care? To make it "understand" which words we put together to create meaning?
But in Automatic1111, this extension can restrict a prompt to a defined region of the image: https://github.com/hako-mikan/sd-webui-regional-prompter
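As a sketch of what a region-scoped prompt looks like with Regional Prompter: in its Columns mode with a divide ratio of `1,1`, the image is split into two vertical regions, `BREAK` separates the per-region prompts, and `ADDCOMM` (per the extension's README) marks a common prompt applied to all regions. The scene below is illustrative, not from the original posts:

```
# Divide mode: Columns, Divide ratio: 1,1
a cozy tavern interior, warm lighting ADDCOMM
a knight in silver armor, red cape BREAK
a wizard in blue robes, holding a staff
```

Here the knight prompt applies only to the left half and the wizard prompt only to the right half, which is how the extension makes the model "care" about which words belong together.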
- Train SD for CAPTION WRITING? I'm tired of uploading hairstyle pics and got "male public hair"
- How to fix issue related to generating two guys when the aspect ratio isn't square?
- A little bit of party after fighting each other in Smash bros (Text2img, controlnet, regional prompter, adetailer)
Second, install Regional Prompter and ADetailer in the Automatic1111 webui. Next, go to Settings > ADetailer and change "sort bounding boxes" from "none" to "left and right". This means that ADetailer will inpaint our subjects from left to right, allowing for greater control over what we want.
- What are some must-have/fun extensions or modules?
- How to control a scene?
You can use ControlNets to control composition in various ways. You can use extensions like multidiffusion upscaler and regional prompter to control the layout of a scene. You can also inpaint details into a scene with the arrangement you want.
- Is there a way to guarantee one model in the image?
DAIN
- Projects of AI tools for creating in-between frames of 2D animations
Here is the last one I played with, but click the link above as there are newer models: https://github.com/baowenbo/DAIN
- Smooth animation with controlnet and regional prompter
Not OP and unsure it would work well in this case, but I usually reach for DAIN to do a few frames of interpolation - https://github.com/baowenbo/DAIN
- FILM: Frame Interpolation for Large Motion
- Working with 15fps video
You may want to look into DAIN https://sites.google.com/view/wenbobao/dain
- "Time to go outside" - An animation I've made using the outpainting tool + After Effects
Not sure if my eyes are playing tricks on me or if the video is slightly choppy. Maybe try running it through DAIN to make it buttery smooth: https://github.com/baowenbo/DAIN
- Using AI to smooth my footage
Nah, this one is actually machine learning. There are plenty of machine-learning interpolation networks out there; DAIN is just one of them. https://github.com/baowenbo/DAIN
- I made an animation of 80 variations of 2 Midjourney portraits
This is DAIN https://github.com/baowenbo/DAIN
- Help with 12.5FPS footage
You could try an AI like DAIN, but I shudder to think how much VRAM you'd need to handle 12k footage. You'd probably need something like an RTX A6000, and even then I doubt that would be enough...
- How to speed ramp?
If you've got suitable hardware, you could run the clips through DAIN or a similar AI interpolation suite to double the framerate - that'll get better results than Optical Flow.
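The framerate-doubling idea above can be sketched independently of DAIN: insert one synthetic in-between frame between each pair of real frames. DAIN builds that frame with depth-aware motion estimation; the toy version below just averages neighboring frames pixel by pixel, which shows the bookkeeping (but none of the quality). All names here are illustrative, not part of DAIN's API:

```python
def interpolate_midframe(a, b):
    """Naive in-between frame: per-pixel average of two frames.
    Real interpolators (DAIN, RIFE, FILM) warp pixels along
    estimated motion instead of blending them."""
    return [[(pa + pb) // 2 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def double_framerate(frames):
    """e.g. 15 fps -> 30 fps: one synthetic frame per real pair."""
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        out.append(interpolate_midframe(prev, nxt))
    out.append(frames[-1])
    return out

# Two tiny 2x2 grayscale "frames" stand in for real video frames.
clip = [[[0, 0], [0, 0]], [[100, 100], [100, 100]]]
smooth = double_framerate(clip)
print(len(smooth))  # 3 frames: original, blended midpoint, original
print(smooth[1])    # [[50, 50], [50, 50]]
```

The linear blend produces ghosting on real footage, which is exactly why learned interpolators like DAIN get better results than simple optical-flow blending.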
- [Haiku] 48fps animation
This exists, it's called DAIN. There are apps you can download which process GIFs into 30fps, 60fps, etc. https://github.com/baowenbo/DAIN
What are some alternatives?
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
video2x - A lossless video/GIF/image upscaler achieved with waifu2x, Anime4K, SRMD and RealSR. Started in Hack the Valley II, 2018.
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation
RealSR - Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model (ICCV 2019)