sd-webui-mov2mov
This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui. (by Scholar01)
TemporalKit
An all-in-one solution for adding temporal stability to a Stable Diffusion render via an Automatic1111 extension (by CiaraStrawberry)
| | sd-webui-mov2mov | TemporalKit |
|---|---|---|
| Mentions | 4 | 14 |
| Stars | 2,060 | 1,870 |
| Growth | - | - |
| Activity | 7.1 | 5.8 |
| Last commit | 2 months ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-mov2mov
Posts with mentions or reviews of sd-webui-mov2mov.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-01.
- Mov2mov and deflicker
I’ve been messing around with video creation within Auto1111 and wanted to share the process I’m using. I started by installing this extension through the Auto1111 GUI: https://github.com/Scholar01/sd-webui-mov2mov. Once installed, you’ll see a new tab for creating diffused video.

I then used FFmpeg to extract a single frame from the video I want to diffuse and experimented with it in img2img to figure out the effect I want to apply. Once I get the effect I want, I copy the settings over to the mov2mov tab. NOTE: I don’t know why, but I get blurry video when using samplers other than euler_a. There may be others that work, but I haven’t looked into it further.

In the attached video I used ControlNet (openpose full and lineart), set the denoising to 0.3, steps to 20, and the movie frame rate to 15. I also noticed that setting the generate movie mode to mp4v avoids an error at the end of the process that prevents the video from being written.

Once that is done, I process the video with another tool (a command-line tool, not in Auto1111) found here: https://github.com/ChenyangLEI/All-In-One-Deflicker. For a free tool, it produces decent results.
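The single-frame extraction step mentioned above can be sketched as a small helper that builds the FFmpeg command line. This is an illustrative sketch only: `ffmpeg_frame_cmd`, the timestamp format, and the output name are assumptions, not part of the mov2mov extension, and it assumes FFmpeg is installed on your PATH.

```python
# Hypothetical helper for grabbing one test frame to experiment with in img2img.
# Assumes FFmpeg is installed; none of these names come from mov2mov itself.
def ffmpeg_frame_cmd(video_path, timestamp, out_png):
    """Build an ffmpeg command that extracts exactly one frame at `timestamp`."""
    return [
        "ffmpeg",
        "-ss", timestamp,     # seek to the chosen moment, e.g. "00:00:05"
        "-i", video_path,     # input video
        "-frames:v", "1",     # write exactly one video frame
        out_png,              # e.g. "test_frame.png" to load into img2img
    ]

# To actually run it:
# import subprocess
# subprocess.run(ffmpeg_frame_cmd("clip.mp4", "00:00:05", "test_frame.png"), check=True)
```

Once the extracted frame looks right in img2img, the same settings carry over to the mov2mov tab as the post describes.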
- TikTok girl's hot dancing.
mov2mov extension
- Temporal cohesion mov2mov vs TemporalKit
- The secret to REALLY easy videos in A1111 (easier than you think)
Download this extension into your A1111 via the URL link; credits to the OG Scholar01. https://github.com/Scholar01/sd-webui-mov2mov.git

Optional: this repo also has ModNet support, which is like automatic AI roto for your video. I tried it once and the results were kinda meh, but hey, it's something. The original links for the ModNet models point to paid file hosts, so I'm making it easier for you by re-uploading them:
https://www.mediafire.com/file/j140lpjn3xfabhb/modnet_photographic_portrait_matting.ckpt/file
https://www.mediafire.com/file/4xjrylvr41pq6sk/modnet_webcam_portrait_matting.ckpt/file
TemporalKit
Posts with mentions or reviews of TemporalKit.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-17.
- Trying Shuffle Dance made with Temporal Kit + EBSynth + edgeOfRealism
Used instructions from Tokyo_Jab here. Used Temporal Kit for converting video to grid and back to video (with EBSynth). Below are the steps I took to convert this video.
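Temporal Kit's grid trick tiles sampled key frames into one sprite sheet so Stable Diffusion processes them together, then cuts the sheet back into individual frames. The index arithmetic behind that can be sketched as below; `frame_cell` and `cell_frame` are hypothetical names for illustration, not Temporal Kit's actual API.

```python
# Sketch of the frame <-> grid-cell mapping behind the sprite-sheet approach.
# Names and layout (row-major) are assumptions, not Temporal Kit's real code.
def frame_cell(index, cols, cell_w, cell_h):
    """Top-left pixel of the grid cell holding frame `index` (row-major order)."""
    row, col = divmod(index, cols)
    return (col * cell_w, row * cell_h)

def cell_frame(x, y, cols, cell_w, cell_h):
    """Inverse mapping: frame index for the cell whose top-left pixel is (x, y)."""
    return (y // cell_h) * cols + (x // cell_w)
```

Diffusing all the key frames as one image is what keeps their style consistent; after generation, the inverse mapping cuts the sheet back into frames for EBSynth to propagate.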
- I must say, this AI animation is truly remarkable. I am curious about how they did it.
TemporalKit implements what you describe. I found it useful to create the sprite and cut it up afterward.
- After Diffusion, an After Effects extension integrating the SD web UI seamlessly.
It's been figured out back in the GAN days, then applied to disco diffusion, and then finally stable warp diffusion, although locked behind a patreon paywall. There are also extensions for A1111 Webui like this temporal kit but it's mostly based on ebsynth and doesn't do true temporal warping that I have in mind with these other links.
- Arima Kana OshinoKo Dance Video
Install TemporalKit (https://github.com/CiaraStrawberry/TemporalKit) in the web UI: Extensions -> Install from URL (tab) -> copy/paste the link -> Install -> Apply and restart UI (Installed tab).
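Under the hood, "Install from URL" just clones the repository into the web UI's `extensions` folder. The sketch below shows where an extension lands given its repo URL; `extension_dir` is a hypothetical helper name, not part of the web UI's code.

```python
import pathlib

# Sketch of what "Install from URL" does: the repo is cloned into
# <webui>/extensions/<repo-name>. extension_dir is an illustrative name.
def extension_dir(webui_dir, repo_url):
    """Destination directory for an extension cloned from `repo_url`."""
    name = repo_url.rstrip("/").removesuffix(".git").rsplit("/", 1)[-1]
    return str(pathlib.Path(webui_dir) / "extensions" / name)

# Manual alternative to the UI flow:
# git clone https://github.com/CiaraStrawberry/TemporalKit <that directory>
```

This is also why a manual `git clone` into the `extensions` folder followed by a UI restart works as a fallback when "Install from URL" fails.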
- Panda.
TemporalKit or Tokyo_Jab's method has always helped me.
- Animating a character in SD
Here's a video for what you're asking https://www.youtube.com/watch?v=W3P8Av3YW8I. More complex animations https://www.reddit.com/r/StableDiffusion/comments/11zeb17/tips_for_temporal_stability_while_changing_the/. There's a new extension for this: https://github.com/CiaraStrawberry/TemporalKit
- Speaking animation, different styles test (SD + TemporalKit + EBSynth)
- Test TemporalKit v1.3/EBSynth, Alita Battle Angel (2019) movie converted into anime
<3 (Make sure to update to the most recent version; the slight jump at the end of each scene is fixed on GitHub.)
- ControlNet extension now natively supports multi-unit batch folders in txt2img and img2img, as well as batch loopback for TemporalNet
For a more complete solution I suggest you have a look at temporal kit: https://github.com/CiaraStrawberry/TemporalKit
- A Few Good Women
Made with temporalkit https://github.com/CiaraStrawberry/TemporalKit
What are some alternatives?
When comparing sd-webui-mov2mov and TemporalKit you can also consider the following projects:
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
After-Diffusion - A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI.
All-In-One-Deflicker - [CVPR2023] Blind Video Deflickering by Neural Filtering with a Flawed Atlas
ebsynth - Fast Example-based Image Synthesis and Style Transfer
OpenH264 - Open Source H.264 Codec
stable-diffusion-webui - Stable Diffusion web UI
artistic-videos - Torch implementation for the paper "Artistic style transfer for videos"
AE_Stable-Diffision - The beginnings of a full after effects plugin that integrates Auto1111 Stable diffusion. Useable as a scriptUI for now.
dream-textures - Stable Diffusion built-in to Blender