artistic-videos vs TemporalKit

| | artistic-videos | TemporalKit |
|---|---|---|
| Mentions | 6 | 14 |
| Stars | 1,746 | 1,870 |
| Growth | - | - |
| Activity | 0.0 | 5.8 |
| Last commit | over 6 years ago | 2 months ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
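The site doesn't publish the exact formula behind this activity number, but a recency-weighted score of this general shape can be sketched as follows; the exponential decay and the 90-day half-life are assumptions for illustration, not the site's actual method:

```python
def activity_score(commit_ages_days, half_life_days=90.0):
    """Toy recency-weighted activity score: a commit made today
    contributes 1.0, and each commit's weight halves every
    `half_life_days` days, so recent commits dominate."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)
```

Under this toy weighting, a repository with one commit today scores the same as one with two commits made a half-life ago, which matches the stated intent that recent commits carry more weight than older ones.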
artistic-videos
-
After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
It's been figured out back in the GAN days, then applied to Disco Diffusion, and then finally Stable WarpFusion, although locked behind a Patreon paywall. There are also extensions for the A1111 web UI, like this Temporal Kit, but it's mostly based on EBSynth and doesn't do the true temporal warping I have in mind with these other links.
-
Bi-directional img2img: is this possible to implement?
What you are thinking of is called "temporal coherence", and it was used all the way back in 2016 to create videos with neural style transfer. Example: https://github.com/manuelruder/artistic-videos
- [D] What are some cool projects for generating art?
-
old school work
Mostly using this repo: https://github.com/manuelruder/artistic-videos
-
Developing an After Effects plugin for deep dreaming. Here are some first renders. Took 20 mins to render 790 frames (each time). But I didn't find any way to control the optical flow (check comments).
You should definitely have the option to toggle optical flow on/off in your plugin if this is what it looks like with it off. I've come across it before while using this old beauty, but I'm guessing that is the old and messy version you mentioned further up the thread.
-
Can someone explain how? I know it's style transfer with optical flow, but I don't know about the tools to create something like this. 🤯
There are a lot of works on video style transfer (e.g. https://github.com/manuelruder/artistic-videos, https://github.com/manuelruder/fast-artistic-videos, https://github.com/sunshineatnoon/LinearStyleTransfer), but with any of these you won't achieve such quality out of the box. The video above is a commercial product with a lot of tricks hidden inside it, which only its creators are aware of.
TemporalKit
-
Trying Shuffle Dance made with Temporal Kit + EBSynth + edgeOfRealism
Used instructions from Tokyo_Jab here. Used Temporal Kit for converting video to grid and back to video (with EBSynth). Below are the steps I took to convert this video.
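The grid step described above — tiling frames into one sprite so Stable Diffusion stylizes them all in a single pass, then cutting them back apart for EBSynth — can be sketched in pure NumPy; `frames_to_grid`/`grid_to_frames` are illustrative names, not TemporalKit's actual API:

```python
import numpy as np

def frames_to_grid(frames, cols):
    """Tile equally sized (H, W, C) frames into one grid image, row-major."""
    h, w, c = frames[0].shape
    rows = -(-len(frames) // cols)  # ceiling division
    grid = np.zeros((rows * h, cols * w, c), dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = frame
    return grid

def grid_to_frames(grid, frame_h, frame_w, n_frames):
    """Inverse of frames_to_grid: cut the grid back into frames."""
    cols = grid.shape[1] // frame_w
    frames = []
    for i in range(n_frames):
        r, col = divmod(i, cols)
        frames.append(grid[r * frame_h:(r + 1) * frame_h,
                           col * frame_w:(col + 1) * frame_w])
    return frames
```

Diffusing the whole grid at once is what keeps the per-frame styles consistent: every tile shares the same seed, prompt, and denoising trajectory.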
-
I must say, this AI animation is truly remarkable. I am curious about how they did it.
Temporal Toolkit implements what you describe. I found it useful to create the sprite and cut it up after.
-
After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
It's been figured out back in the GAN days, then applied to Disco Diffusion, and then finally Stable WarpFusion, although locked behind a Patreon paywall. There are also extensions for the A1111 web UI, like this Temporal Kit, but it's mostly based on EBSynth and doesn't do the true temporal warping I have in mind with these other links.
-
Arima Kana OshinoKo Dance Video
Install TemporalKit https://github.com/CiaraStrawberry/TemporalKit in the Stable Diffusion web UI (Extensions -> Install from URL tab -> copy-paste the link -> Install -> Apply and restart UI on the Installed tab).
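Equivalently, assuming the default Automatic1111 folder layout (the `WEBUI_DIR` path is an assumption — adjust it to your install), the extension can be installed from the command line by cloning it into the `extensions` folder:

```shell
# Path to your Automatic1111 checkout; adjust if yours lives elsewhere.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
cd "$WEBUI_DIR/extensions"
git clone https://github.com/CiaraStrawberry/TemporalKit
# Restart the web UI afterwards so the extension is loaded.
```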
-
Panda.
TemporalKit Or Tokyo jab's method has always helped me
-
Animating a character in SD
Here's a video for what you're asking https://www.youtube.com/watch?v=W3P8Av3YW8I. More complex animations https://www.reddit.com/r/StableDiffusion/comments/11zeb17/tips_for_temporal_stability_while_changing_the/. There's a new extension for this: https://github.com/CiaraStrawberry/TemporalKit
- Speaking animation different styles test ( SD + TemporalKit + ebsynth )
-
Test TemporalKit v1.3/EBSynth: Alita: Battle Angel (2019) movie converted into anime
<3 (make sure to update to the most recent version; the slight jump at the end of each scene is fixed on GitHub).
-
ControlNet extension now natively supports multi-unit batch folders in txt2img and img2img, as well as batch loopback for TemporalNet
For a more complete solution I suggest you have a look at temporal kit: https://github.com/CiaraStrawberry/TemporalKit
-
A Few Good Women
Made with temporalkit https://github.com/CiaraStrawberry/TemporalKit
What are some alternatives?
StyleGAN-nada
sd-webui-mov2mov - This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.
neural-style-pt - PyTorch implementation of neural style transfer algorithm
After-Diffusion - A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI.
flownet2-pytorch - Pytorch implementation of FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
ebsynth - Fast Example-based Image Synthesis and Style Transfer
DeepDreamAnimV2 - Code is still under development
stable-diffusion-webui - Stable Diffusion web UI
AE_Stable-Diffision - The beginnings of a full after effects plugin that integrates Auto1111 Stable diffusion. Useable as a scriptUI for now.
dream-textures - Stable Diffusion built-in to Blender