AnimateDiff vs frame-interpolation

|  | AnimateDiff | frame-interpolation |
|---|---|---|
| Mentions | 9 | 74 |
| Stars | 9,156 | 2,711 |
| Growth | - | 1.4% |
| Activity | 8.0 | 0.0 |
| Last commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AnimateDiff
- Animediff - turn images into videos by AI
- Going to lose my mind at this point with this problem
[P] Do you want to join a motley crew who are scaling/retraining AnimateDiff for open source? AD trainer code just released!
POM from Banodoco.ai/Steerable Motion here. A bunch of closed-source companies are building on top of Animatediff - for example, Kaiber.ai launched an impressive image2video tool - and others are working towards scaling it.
AnimateDiff is pretty cool to mess around with
No clue if shorts embed properly. If not I can always post the video file when I'm at my computer later today. GitHub: https://github.com/guoyww/AnimateDiff
Future of AI Video generation. PIKA LABS.
Isn't this just AnimateDiff?
- Okay, that's AI but how?
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
GitHub repo has been updated: guoyww/AnimateDiff: Official implementation of AnimateDiff. (github.com)
frame-interpolation
Aging with AI from age 9 to age 99.
- Lastly I used FILM, an image interpolation library to interpolate between images
AnimDiff
1) Generate the video using https://github.com/camenduru/animatediff
2) Upscale using SD-CN: https://github.com/volotat/SD-CN-Animation
3) Interpolate frames using https://github.com/google-research/frame-interpolation
4) Add audio using https://huggingface.co/spaces/suno/bark
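Step 3 can be scripted against the CLI that the frame-interpolation repo ships. This is only a rough sketch of how it could be wired up: the `eval.interpolator_cli` module path and the flag names are from the repo's README as I remember them, and the frame/model paths are placeholders, so check everything against the current README.

```python
# Sketch of step 3 only: run FILM over the frames rendered in steps 1-2, using the CLI
# from google-research/frame-interpolation. Run this from a checkout of that repo with
# its requirements installed; both paths below are placeholders.
import subprocess

frames_dir = "animatediff_frames"                            # hypothetical folder of rendered frames
model_dir = "pretrained_models/film_net/Style/saved_model"   # pretrained FILM weights from the README

subprocess.run(
    [
        "python3", "-m", "eval.interpolator_cli",
        "--pattern", frames_dir,             # directory (or glob) holding the input frames
        "--model_path", model_dir,
        "--times_to_interpolate", "3",       # each recursion level doubles the frame count
        "--output_video",                    # also assemble the interpolated frames into an mp4
    ],
    check=True,
)
```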
What is the current best way to make sequence images for animation that keep the art style consistent?
I am aware of interpolation as well (https://github.com/google-research/frame-interpolation), where you give it two images and it generates the images in between, but I'm not sure I have good enough images to attempt this yet.
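That two-images-in, in-betweens-out step can also be tried straight from Python via the FILM model published on TF Hub. A minimal sketch, assuming the hub handle https://tfhub.dev/google/film/1 and the `x0`/`x1`/`time`/`image` tensor names shown in the TF Hub FILM tutorial (worth verifying); the keyframe filenames are made up.

```python
import numpy as np
import tensorflow_hub as hub
from PIL import Image

def load_rgb(path):
    # float32 RGB in [0, 1], batched to shape [1, H, W, 3]; both keyframes must be the same size
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return img[np.newaxis, ...]

model = hub.load("https://tfhub.dev/google/film/1")  # assumed TF Hub handle

x0 = load_rgb("keyframe_a.png")  # hypothetical start keyframe
x1 = load_rgb("keyframe_b.png")  # hypothetical end keyframe

# Ask for a few evenly spaced in-between frames; t=0 corresponds to x0, t=1 to x1
for i, t in enumerate((0.25, 0.5, 0.75)):
    out = model({"x0": x0, "x1": x1, "time": np.array([[t]], dtype=np.float32)})
    frame = np.clip(out["image"][0].numpy(), 0.0, 1.0)
    Image.fromarray((frame * 255).astype(np.uint8)).save(f"between_{i}.png")
```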
The AI will make You an Anime in Real Time
Super neat though. With some interpolation (possibly this Google Research one I just found via ChatGPT), it wouldn't be too bad to dump a video in and have it process in the background.
- my older video, without controlnet or training
The secret to REALLY easy videos in A1111 (easier than you think)
FILM repo by Google Research - they made this very cool interpolation method, my favourite so far. It's a pain to set up: I didn't manage to run it on my local machine because I can't get "pip install tensorflow==2.6.2" to work on my Windows box, so I can't install the requirements or run the script. BUT you can use the Colab here, and once you hook it up to your GDrive you can change the path to your folder of images and it will process them and spit out the interpolated video for you. I only have the free tier, and it took 16 minutes for the sample video.
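If the repo's pinned requirements won't install locally, one possible workaround (untested on Windows) is to skip the repo's script and load the same FILM model from TF Hub in plain Python; it still needs a working TensorFlow, but avoids the repo setup. The hub handle, tensor names, and folder layout below are my assumptions, not something from the comment above.

```python
from pathlib import Path

import numpy as np
import tensorflow_hub as hub
from PIL import Image

model = hub.load("https://tfhub.dev/google/film/1")  # assumed TF Hub handle

frames_dir = Path("my_frames")            # placeholder: folder of same-sized, numbered PNGs
paths = sorted(frames_dir.glob("*.png"))

def load(path):
    # float32 RGB in [0, 1], shape [1, H, W, 3]
    return (np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0)[np.newaxis]

# Insert a FILM midpoint between every consecutive pair, roughly doubling the frame count
doubled = []
for a, b in zip(paths, paths[1:]):
    x0, x1 = load(a), load(b)
    mid = model({"x0": x0, "x1": x1, "time": np.array([[0.5]], dtype=np.float32)})["image"]
    doubled.append(x0[0])
    doubled.append(np.clip(mid[0].numpy(), 0.0, 1.0))
doubled.append(load(paths[-1])[0])

out_dir = frames_dir / "interpolated"
out_dir.mkdir(exist_ok=True)
for i, frame in enumerate(doubled):
    Image.fromarray((frame * 255).astype(np.uint8)).save(out_dir / f"{i:05d}.png")
# The written sequence can then be assembled into a video, e.g. with ffmpeg.
```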
Loopback Wave Workflows (FILM, AE, Flowframes)
FILM (Frame Interpolation for Large Motion)
More Loopback Wave + Flow, this time with realistic people
Edit: used this for the interpolation. Flow wasn't the correct word. https://github.com/google-research/frame-interpolation
Large Motion Frame Interpolation - Google AI Blog
Also off-topic, but their github.io page has a bibtex snippet for anyone wanting to cite their work in their papers. I'm not an academic, but I still strangely appreciate the gesture.
AI Video to Fill Missing Frames/Smooth Animation?
FILM? https://film-net.github.io/
What are some alternatives?
SD-CN-Animation - This script allows to automate video stylization task using StableDiffusion and ControlNet.
ebsynth - Fast Example-based Image Synthesis and Style Transfer
AnimateDiff - Official implementation of AnimateDiff.
AnimeInterp - The code for CVPR21 paper "Deep Animation Video Interpolation in the Wild"
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
sd-webui-mov2mov - This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
optical.flow.demo - A project that uses optical flow and machine learning to detect aimhacking in video clips.
frame-interpolation - FILM: Frame Interpolation for Large Motion, In arXiv 2022.
ECCV2022-RIFE - ECCV2022 - Real-Time Intermediate Flow Estimation for Video Frame Interpolation
XVFI - [ICCV 2021, Oral 3%] Official repository of XVFI