stable-diffusion-webui-two-shot vs DAIN

| | stable-diffusion-webui-two-shot | DAIN |
|---|---|---|
| Mentions | 30 | 34 |
| Stars | 694 | 8,064 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 12 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-two-shot
-
Discussion Thread
They do, it just needs more work. Stable Diffusion + Latent Couple + Composable LoRA lets you have an Elon model and a Mark model in different parts of the image. No Elon/Mark-ness dominating all the subjects.
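The comment above describes region-based composition: each subprompt's denoised latent is weighted by a spatial mask so subjects stay in their own part of the canvas. A minimal NumPy sketch of that idea follows; it is a conceptual illustration under assumed shapes and a made-up `couple_latents` helper, not the extension's actual code.

```python
import numpy as np

def couple_latents(latents, masks, weights):
    """Blend per-subprompt latents using per-region masks.

    latents: list of (C, H, W) arrays, one per subprompt
    masks:   list of (H, W) arrays in [0, 1], one per subprompt
    weights: list of floats (illustrative per-region "weight" values)
    """
    out = np.zeros_like(latents[0])
    total = np.zeros(masks[0].shape)
    for lat, mask, w in zip(latents, masks, weights):
        out += lat * (mask * w)           # mask broadcasts over channels
        total += mask * w
    return out / np.maximum(total, 1e-8)  # normalize where regions overlap

# Two subprompts split the canvas left/right
h, w = 8, 8
left = np.zeros((h, w)); left[:, : w // 2] = 1.0
right = 1.0 - left
a = np.ones((4, h, w))         # stand-in for one subject's latent
b = np.full((4, h, w), 2.0)    # stand-in for the other subject's latent
blended = couple_latents([a, b], [left, right], [1.0, 1.0])
```

With non-overlapping masks the blend simply tiles the two latents side by side; the normalization term only matters where masks overlap.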
-
ControlNet Reference-Only problems
https://github.com/opparco/stable-diffusion-webui-two-shot https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper https://github.com/opparco/stable-diffusion-webui-composable-lora https://github.com/thomasasfk/sd-webui-aspect-ratio-helper
-
[STABLE DIFFUSION] I need help with separating characters. (LORA)
Possibly with something like the Latent Couple extension + Composable LoRA, cf this video.
-
I built the easiest-to-use desktop application for running Stable Diffusion on your PC - and it's free for all of you
Would love to see "two shot" (https://github.com/opparco/stable-diffusion-webui-two-shot) implemented into the UI in the future; so far your UI is very satisfying and easy to use. Keep up the good work!
-
Latent Couple Error
https://github.com/opparco/stable-diffusion-webui-two-shot/issues/54 Found the solution; it works for me right now.
-
Is there some way to make Latent Couple + Composable Lora work on Vlad Automatic webUI?
So I noticed that Latent Couple and Composable Lora don't work on Vlad Automatic (something to do with those extensions being made for an outdated version of the A1111 webUI).
-
Regional Prompter is a godsend- Midnight witch (2048x2560)
I tried to use it and it works OK, I just haven't tried it much yet; the prompting became confusing for me. You can try it out and tell me what to do. Area weight and the general prompt got me confused, sometimes it works and sometimes it doesn't; it feels like we can only do two subjects. idk
-
Need help with Regional Prompter
This is a tool for the Latent couple extension and I was asking about the Regional Prompter extension. I will give this one a try too.
-
Multiple characters on stable diffusion using masks
Sounds like the Latent Couple extension: https://github.com/opparco/stable-diffusion-webui-two-shot.git
-
Link And Princess Zelda Share A Sweet Moment Together
DAIN
-
Projects of AI tools for creating inbetween frames of 2D animations
Here is the last one I played with, but click the link above as there are newer models: https://github.com/baowenbo/DAIN
-
Smooth animation with controlnet and regional prompter
Not OP and unsure it would work well in this case, but I usually reach for DAIN to do a few frames of interpolation - https://github.com/baowenbo/DAIN
-
FILM: Frame Interpolation for Large Motion
-
Working with 15fps video
You may want to look into DAIN https://sites.google.com/view/wenbobao/dain
-
"Time to go outside" - An animation i've made using the outpainting tool + After Effects
Not sure if my eyes are playing tricks on me or if the video is slightly choppy. Maybe try running it through DAIN to make it buttery smooth: https://github.com/baowenbo/DAIN
-
Using AI to smooth my footage
Nah this one is actually machine learning. There's plenty of machine learning interpolation networks out there. DAIN is just one of them. https://github.com/baowenbo/DAIN
-
I made an animation of 80 variations of 2 Midjourney portraits
This is DAIN https://github.com/baowenbo/DAIN
-
Help with 12.5FPS footage
You could try an AI like DAIN but I shudder to think how much VRAM you'd need to handle 12k footage, you'd probably need something like an RTX A6000, and even then I doubt that would be enough...
-
How to speed ramp?
If you've got suitable hardware, you could run the clips through DAIN or a similar AI interpolation suite to double the framerate - that'll get better results than Optical Flow.
-
[Haiku] 48fps animation
This exists, it's called DAIN. There are apps you can download which process GIFs into 30fps, 60fps, etc. https://github.com/baowenbo/DAIN
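The comments above all use DAIN for the same task: generating in-between frames to raise a clip's framerate. As a point of contrast, here is the naive non-learned baseline, a plain linear cross-fade between neighboring frames. Learned interpolators like DAIN instead estimate motion (depth-aware flow) so moving objects stay sharp rather than ghosting; this sketch only illustrates what "in-between frames" means, and the `crossfade` helper is a made-up name, not DAIN's API.

```python
import numpy as np

def crossfade(frame_a, frame_b, n_between):
    """Return n_between frames linearly blended between frame_a and frame_b."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)              # blend position in (0, 1)
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Doubling 15 fps to 30 fps needs one in-between frame per original pair
a = np.zeros((4, 4, 3))              # stand-in for a dark frame
b = np.full((4, 4, 3), 100.0)        # stand-in for a bright frame
mid = crossfade(a, b, 1)[0]          # halfway blend of the two frames
```

Cross-fading is what simple "frame blending" modes in video editors do, and it visibly smears motion; the comments recommending DAIN over Optical Flow are pointing at exactly this quality gap.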
What are some alternatives?
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
adetailer - Auto detecting, masking and inpainting with detection model.
video2x - A lossless video/GIF/image upscaler achieved with waifu2x, Anime4K, SRMD and RealSR. Started in Hack the Valley II, 2018.
sd-webui-controlnet - WebUI extension for ControlNet
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
sd-webui-regional-prompter - set prompt to divided region
RealSR - Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model (ICCV 2019)
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
3D-Machine-Learning - A resource repository for 3D machine learning