ECCV2022-RIFE vs VideoRenderer

| | ECCV2022-RIFE | VideoRenderer |
|---|---|---|
| Mentions | 12 | 41 |
| Stars | 4,090 | 1,138 |
| Growth | 2.0% | - |
| Activity | 5.8 | 9.1 |
| Latest commit | 2 months ago | 3 months ago |
| Language | Python | C++ |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ECCV2022-RIFE
-
AI Frame interpolation Question
Check out RIFE.
-
Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
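As a rough illustration of what "2x interpolation" over a folder of frames produces: one synthesized frame between each consecutive pair. The pixel averaging below is a naive stand-in for the flow-based midpoint a model like RIFE would predict; only the frame ordering and counts are the point.

```python
# Sketch of 2x interpolation over an ordered frame sequence.
# Each frame is modeled as a flat list of pixel values; the
# averaged midpoint is a stand-in for RIFE's learned prediction.
def interpolate_2x(frames):
    """Given N frames, return 2N-1 frames with one synthesized
    midpoint inserted between each consecutive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(pa + pb) // 2 for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out
```

In a real run, a tool like Flowframes hands each consecutive pair to the RIFE network instead of blending, then writes the doubled sequence back out at twice the frame rate.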
-
New NVIDIA Driver with RTX Video Super Resolution is Now Available!
Personally I have mine set to use RIFE AI via TensorRT for frame interpolation (x2) if the FPS is 30 or less.
-
I just added ControlNet BATCH support in automatic1111 webui and ControlNet extension, and here's the result. Read comment to support the Pull Requests so you can use this technique as soon as possible.
Oh now that I saw this comment, I started to investigate frame interpolation techniques using AI and found this: https://github.com/megvii-research/ECCV2022-RIFE
-
How can indie devs make 2d animations quickly, or streamline the process?
Yes, but you need to use a different AI first. There are multiple AIs, like RIFE (there are apps for it if you don't like code), that will smooth out your animation. Then you can use those frames with Novel AI to get a more organic look in the end.
-
ECCV2022-RIFE VS FluidFrames.RIFE - a user suggested alternative
2 projects | 4 Feb 2023
-
Inpainting every frame using AE + SD
For a smoother effect you can reduce the frames per second and add FILM or RIFE frames in between.
-
I inserted myself into stable diffusion, not perfect but it kinda looks my face
Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
-
Stable Diffusion Animation
Sure! This would be my approach (and tools) if I was smarter:
If you generate the images with some similarities and use the right interpolation, you won't need 1,000 images like my video did, and you can still obtain smooth movement.
First, generate images with some kind of visual anchor (a background, an object). You can generate each frame using the previous frame as the reference image, or use the same seed with a different prompt/parameters, or you can go wild using img2img/inpainting (btw I struggle to find a true inpainting tool for Stable Diffusion: they all seem to be just img2img with a mask, without context).
Then pass the generated images to one of the most recent interpolation algorithms, like this one https://github.com/megvii-research/ECCV2022-RIFE or the one used in the Replicate demo we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation ).
The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
In the end, I'd use a good app to stabilize the footage if needed, to get a more "calm" look. I use Luma Fusion, but it's a paid app (cheap, one-time payment, for iOS). I'm sure there are a ton of open-source alternatives.
It's an approach similar to the animation on replicate, but it allows a lot of fine-tuning and you can add new animation ideas/tools to the process.
Nothing revolutionary, but I hope it helps!
> You have generated some pretty cool designs.
Thanks! I've put in a lot of work over the last weeks. The project has a mission; I wrote something up, but it's not ready yet. I believe it will be by the launch of Dall-E 8 :-/
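The animation recipe described above can be sketched as glue code. Here `generate_frame` and `interpolate_pair` are hypothetical stand-ins for an img2img generation call and a RIFE/FILM call, not real APIs; the sketch only shows how the previous frame anchors the next one and where the interpolator slots in.

```python
# Hypothetical pipeline glue: anchored generation, then 2x interpolation.
# generate_frame(prompt, reference) and interpolate_pair(a, b) are
# caller-supplied stand-ins for the actual model invocations.
def build_animation(prompts, generate_frame, interpolate_pair):
    keyframes = []
    prev = None
    for p in prompts:
        # The previous frame is fed back as the visual anchor.
        prev = generate_frame(p, reference=prev)
        keyframes.append(prev)
    # Insert one interpolated frame between each consecutive pair.
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        frames.append(a)
        frames.append(interpolate_pair(a, b))
    frames.append(keyframes[-1])
    return frames
```

Running the interpolation step more than once (or with a higher `--exp`-style factor in the actual tools) smooths the motion further at the cost of more synthesized frames.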
-
Help with interpolating "missing" frames from source video
You'd probably get way better results by using something like RIFE to do the interpolation and recreate the missing frames, instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
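Before interpolating, you need to know where the missing frames are. A hedged sketch of that first step, assuming frame numbers recovered from the container's timestamps (the function name and input format are illustrative, not from any real tool):

```python
# Locate gaps in a source video's frame sequence so an interpolator
# such as RIFE can synthesize the missing frames for each span.
def missing_spans(frame_numbers):
    """Return (prev_frame, next_frame, missing_count) for each gap
    in an ascending list of present frame numbers."""
    gaps = []
    for f0, f1 in zip(frame_numbers, frame_numbers[1:]):
        if f1 - f0 > 1:
            gaps.append((f0, f1, f1 - f0 - 1))
    return gaps
```

Each reported span then becomes one interpolation job: the frames on either side of the gap are the input pair, and the count says how many in-between frames to request.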
VideoRenderer
-
Anyone using Nvidia Super Resolution to upscale VLC SBS videos to 4k?
Update: I got it working with MPC-BE 1.6.6 and this extension: https://github.com/emoose/VideoRenderer/releases/tag/rtx-1.1 It is no game changer for 3D VR, but it does feel slightly sharper. For 480p anime, though, it makes a huge difference, but that is not something I watch in VR anyway.
-
In a recent update, Firefox finally added support for Nvidia Video Super Resolution, catching up to Chromium web browsers and other software
Releases · emoose/VideoRenderer (github.com)
-
Is there a way I can run Nvidia Shield software on my PC without buying the "Nvidia Shield" unit?
-
Question about AI upscaling
Recent drivers for 30 and 40 series cards added VSR (Video Super Resolution), which upscales videos in Chromium browsers and in media players that have added support for it, like MPC and VLC. I recommend following this guide for MPC, as VLC support is still in development and has a huge memory leak right now: https://github.com/emoose/VideoRenderer/releases
-
Live Upscale Videos On PC/Steam Deck
NVIDIA's AI upscaling technique requires a GeForce RTX 40 or 30 Series GPU. On PC I ended up using Media Player Classic Black Edition (MPC-BE) with emoose's VideoRenderer.
-
NVIDIA RTX Video Super Resolution is now supported by VLC media player - VideoCardz.com
I don’t think you’re correct; that one is in “maintenance mode” with its last release in January, while emoose’s fork has been constantly adding new features and is much more commonly downloaded: https://github.com/emoose/VideoRenderer/releases/tag/rtx-1.1
-
Impressive 8K upscaling with Topaz AI. Recently I played with Topaz AI and tried their models, Gaia AI and Proteus. I really liked Gaia AI, but Proteus was better in terms of removing noise and restoring from compression artifacts. Attached images are 100% crops of the 4K (original) and 8K (upscaled).
I haven’t messed around with 4K to 8K (might not even be available), but the new-ish nVidia Video SuperRes modded into MPC has been pretty wild for upscaling blurry VR content, considering it does it in real time.
-
VSR on downloaded videos
You already can with MPC-BE. Check it out.
-
Nvidia VSR upscaler
https://github.com/emoose/VideoRenderer/releases/tag/rtx-1.0
-
Is there an open source project to upscale my local video file with DLSS?
If you are looking to upscale the files themselves, I don't know of any free AI video upscaling app. But if you want to use RTX Super Resolution during video playback, although I don't know if or when players like MPC or VLC will implement it natively, someone has made it work on the forks of Media Player Classic, be it MPC-HC (K-Lite Codec Pack), MPC-BE, etc. You can get it here: https://github.com/emoose/VideoRenderer/releases/tag/rtx-1.0
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
mpv-upscale-2x_animejanai - Real-time anime upscaling to 4k in mpv with Real-ESRGAN compact models
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
cupscale - Image Upscaling GUI based on ESRGAN
sd-webui-controlnet - WebUI extension for ControlNet
HTWebRemote - Simple remote control of your home theater devices and HTPC from any web browser
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
ytdl-patched - yt-dlp fork with some more features
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.
FSRCNN-TensorFlow - An implementation of the Fast Super-Resolution Convolutional Neural Network in TensorFlow
AnimeGANv2 - [Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime
Anime4K - A High-Quality Real Time Upscaler for Anime Video