frame-interpolation vs ECCV2022-RIFE
| | frame-interpolation | ECCV2022-RIFE |
|---|---|---|
| Mentions | 74 | 12 |
| Stars | 2,672 | 4,057 |
| Growth | 3.0% | 2.6% |
| Activity | 0.0 | 5.8 |
| Latest commit | 8 months ago | about 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frame-interpolation
-
Aging with AI from age 9 to age 99.
- Lastly, I used FILM, a frame-interpolation library, to interpolate between the images
-
AnimDiff
1) Generate video using https://github.com/camenduru/animatediff
2) Upscale using SD-CN https://github.com/volotat/SD-CN-Animation
3) Interpolate frames using https://github.com/google-research/frame-interpolation
4) Add audio using https://huggingface.co/spaces/suno/bark
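For step 4, a minimal sketch of generating a narration track and muxing it onto the finished video, assuming the `bark` Python package (the model behind that Space) and `ffmpeg` are installed; the file names and prompt are placeholders, steps 1-3 are done with the linked tools, and a FILM sketch appears further down:

```python
import subprocess

from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

# Step 4: generate an audio track with bark (downloads weights on first run).
preload_models()
audio = generate_audio("A short narration for the animation.")  # placeholder prompt
write_wav("narration.wav", SAMPLE_RATE, audio)

# Mux the narration onto the interpolated video from step 3 (placeholder names).
subprocess.run([
    "ffmpeg", "-i", "interpolated.mp4", "-i", "narration.wav",
    "-c:v", "copy", "-c:a", "aac", "-shortest", "final.mp4",
], check=True)
```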
-
What is the current best way to make sequence images for animation that keep the art style consistent?
I am aware of interpolation as well (https://github.com/google-research/frame-interpolation), where you give it two images and it generates the in-between images, but I'm not sure I have good enough images to attempt this yet.
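For the two-image case described above, the FILM repo documents a test script that produces the midpoint frame. A minimal sketch via `subprocess`, assuming the repo is cloned, its requirements installed, and a pretrained film_net model downloaded (all paths are placeholders):

```python
import subprocess

# Produce the in-between frame for two stills using FILM's documented
# eval.interpolator_test entry point (run from the repo root).
subprocess.run([
    "python3", "-m", "eval.interpolator_test",
    "--frame1", "photos/one.png",
    "--frame2", "photos/two.png",
    "--model_path", "pretrained_models/film_net/Style/saved_model",
    "--output_frame", "photos/middle.png",
], check=True)
```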
-
The AI will make You an Anime in Real Time
Super neat though. With some interpolation (possibly this Google Research one I just found via ChatGPT), it wouldn't be too bad to dump a video in and have it process in the background.
- my older video, without controlnet or training
-
The secret to REALLY easy videos in A1111 (easier than you think)
FILM is a repo by Google Research; they made this very cool interpolation method, my favourite so far. It's a pain to set up, though: I didn't manage to run it on my local machine (I'm not very smart, and I can't get "pip install tensorflow==2.6.2" to work on my Windows machine, so I can't install the requirements or run the script). BUT you can use the Colab, and once you hook it up to your Google Drive, you can change the path to your folder of images and it will process them and spit out the interpolated video for you. I only have the free tier, and it took 16 minutes for the sample video.
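The Colab is essentially a wrapper around the repo's batch CLI; a minimal sketch of the same thing, assuming a working TensorFlow environment, the repo cloned with a pretrained model, and a folder of sequentially named frames (the paths and interpolation count are placeholders):

```python
import subprocess

# Recursively interpolate between every adjacent pair of frames in
# `photos/` and assemble the result into a video, using the
# eval.interpolator_cli entry point documented in the FILM README
# (run from the repo root).
subprocess.run([
    "python3", "-m", "eval.interpolator_cli",
    "--pattern", "photos",
    "--model_path", "pretrained_models/film_net/Style/saved_model",
    "--times_to_interpolate", "6",  # 2^6 - 1 = 63 new frames per pair
    "--output_video",
], check=True)
```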
-
Loopback Wave Workflows (FILM, AE, Flowframes)
FILM (Frame Interpolation for Large Motion)
-
More Loopback Wave + Flow, this time with realistic people
Edit: used this for the interpolation. Flow wasn't the correct word. https://github.com/google-research/frame-interpolation
-
Large Motion Frame Interpolation – Google AI Blog
Also off-topic, but their github.io page has a bibtex snippet for anyone wanting to cite their work in their papers. I'm not an academic, but I still strangely appreciate the gesture.
-
AI Video to Fill Missing Frames/Smooth Animation?
FILM? https://film-net.github.io/
ECCV2022-RIFE
-
AI Frame interpolation Question
Check out RIFE.
-
Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
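Flowframes is a GUI wrapper; the RIFE repo itself documents an equivalent command line. A minimal sketch, assuming the ECCV2022-RIFE repo is cloned with its pretrained model unpacked into `train_log/` (the input file name is a placeholder):

```python
import subprocess

# 2x interpolation with RIFE's documented inference script
# (--exp=1 doubles the frame count; run from the repo root).
subprocess.run(
    ["python3", "inference_video.py", "--exp=1", "--video=clip.mp4"],
    check=True,
)
```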
-
New NVIDIA Driver with RTX Video Super Resolution is Now Available!
Personally, I have mine set to use RIFE AI via TensorRT for frame interpolation (2x) if the FPS is 30 or less.
-
I just added ControlNet BATCH support in automatic1111 webui and ControlNet extension, and here's the result. Read comment to support the Pull Requests so you can use this technique as soon as possible.
Oh, now that I've seen this comment, I've started to investigate AI frame-interpolation techniques and found this: https://github.com/megvii-research/ECCV2022-RIFE
-
How can indie devs make 2d animations quickly, or streamline the process?
Yes, but you need to use a different AI first. There are multiple AIs, like RIFE (there are apps for it if you don't like code), that will smooth out your animation. Then you can use those frames with NovelAI to get a more organic look in the end.
-
ECCV2022-RIFE VS FluidFrames.RIFE - a user suggested alternative
2 projects | 4 Feb 2023
-
Inpainting every frame using AE + SD
For a smoother effect, you can reduce the frames per second and add FILM or RIFE interpolation between the frames.
-
I inserted myself into stable diffusion, not perfect but it kinda looks my face
Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
-
Stable Diffusion Animation
Sure! This would be my approach (and tools) if I were smarter:
If you make the generations with some similarities and use the right interpolation, you don't need 1,000 images like my video did, and you can still obtain smooth movement.
First, generate images with some kind of visual anchor (a background, an object). You can generate frames using the previous frame as a reference image, or use the same seed with a different prompt/parameters, or you can go wild with img2img/inpainting (btw, I struggle to find a real inpainting tool for Stable Diffusion: they all seem to be just img2img with a mask, without context).
Then pass the generated images to one of the most recent interpolation algorithms, like this one https://github.com/megvii-research/ECCV2022-RIFE or the one used in the Replicate demo we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation ).
The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
In the end, I'd use a good app to stabilize the image if needed, to get a "calmer" look. I use LumaFusion, but it's a paid app (cheap, one-time payment, for iOS). I'm sure there are a ton of open-source alternatives.
It's an approach similar to the animation on replicate, but it allows a lot of fine-tuning and you can add new animation ideas/tools to the process.
Nothing revolutionary, but I hope it helps!
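A minimal sketch of the interpolation step of this approach, using RIFE's documented image-pair script, assuming the ECCV2022-RIFE repo is set up with its pretrained model and two generated keyframes on disk (the file names are placeholders):

```python
import subprocess

# Generate 2^4 - 1 = 15 in-between frames for a pair of keyframes with
# RIFE's documented inference_img.py (run from the repo root).
subprocess.run(
    ["python3", "inference_img.py",
     "--img", "img0.png", "img1.png", "--exp=4"],
    check=True,
)
```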
> You have generated some pretty cool designs.
Thanks! I've put a lot of work in over the last few weeks. The project has a mission; I wrote something up, but it's not ready yet. I believe it will be by the launch of DALL-E 8 :-/
-
Help with interpolating "missing" frames from source video
You'd probably get way better results by using something like RIFE to do the interpolation and recreate the missing frames, instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
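For reference, the minterpolate baseline mentioned here is a single built-in ffmpeg filter, so it is much easier to set up than RIFE (whose command line is sketched above); a minimal example, with the target frame rate as a placeholder:

```python
import subprocess

# Motion-compensated interpolation to 60 fps with ffmpeg's built-in
# minterpolate filter: easier to run than RIFE, but typically blurrier
# around large motions.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "minterpolate=fps=60:mi_mode=mci",
    "out_60fps.mp4",
], check=True)
```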
What are some alternatives?
ebsynth - Fast Example-based Image Synthesis and Style Transfer
stable-diffusion-webui - Stable Diffusion web UI
AnimeInterp - The code for CVPR21 paper "Deep Animation Video Interpolation in the Wild"
sd-webui-controlnet - WebUI extension for ControlNet
sd-webui-mov2mov - This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
VideoRenderer - RTX HDR modded into MPC-VideoRenderer.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.
optical.flow.demo - A project that uses optical flow and machine learning to detect aimhacking in video clips.
AnimeGANv2 - [Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime