ECCV2022-RIFE vs DeOldify

| | ECCV2022-RIFE | DeOldify |
|---|---|---|
| Mentions | 12 | 58 |
| Stars | 4,090 | 17,624 |
| Growth | 2.0% | - |
| Activity | 5.8 | 2.7 |
| Latest commit | 2 months ago | 7 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ECCV2022-RIFE
-
AI Frame interpolation Question
Check out RIFE.
-
Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
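To make the "2x interpolation" step concrete: doubling a sequence means inserting one new frame between each consecutive pair. RIFE predicts a motion-aware intermediate; the sketch below is only a structural stand-in that averages neighbouring frames, so the shape of the operation is visible without the model.

```python
import numpy as np

def interpolate_2x(frames):
    """Double a frame sequence by inserting one intermediate between
    each consecutive pair. A real interpolator (e.g. RIFE) predicts a
    motion-aware intermediate; here we just average the neighbours as
    a naive stand-in."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(((a.astype(np.float32) + b) / 2).astype(a.dtype))
    out.append(frames[-1])
    return out

# Four dummy 8x8 grayscale frames -> seven frames after 2x interpolation.
frames = [np.full((8, 8), i * 60, dtype=np.uint8) for i in range(4)]
doubled = interpolate_2x(frames)
print(len(doubled))  # 7
```

Note that 2x interpolation turns n frames into 2n-1, since no frame is appended after the last one.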
-
New NVIDIA Driver with RTX Video Super Resolution is Now Available!
Personally I have mine set to use RIFE AI via TensorRT for frame interpolation (x2) if the FPS is 30 or less.
-
I just added ControlNet BATCH support in the automatic1111 webui and ControlNet extension, and here's the result. Read the comment to support the pull requests so you can use this technique as soon as possible.
Oh now that I saw this comment, I started to investigate frame interpolation techniques using AI and found this: https://github.com/megvii-research/ECCV2022-RIFE
-
How can indie devs make 2d animations quickly, or streamline the process?
Yes, but you need to use a different AI first. There are multiple AIs like RIFE (there are apps for it if you don't like code) that will smooth out your animation. Then you can use those frames with Novel AI to get a more organic look in the end.
-
ECCV2022-RIFE VS FluidFrames.RIFE - a user suggested alternative
2 projects | 4 Feb 2023
-
Inpainting every frame using AE + SD
To have more smooth effect you can reduce frame per second and add FILM or RIFE between frames.
-
I inserted myself into stable diffusion, not perfect but it kinda looks my face
Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
-
Stable Diffusion Animation
Sure! This would be my approach (and tools) if I were smarter:
If you make the generations with some similarities and use the right interpolation, you don't need 1,000 images like my video, and you can still obtain smooth movement.
First, generate images with some kind of visual anchor (background, an object). You can use frames generated with the previous frame as the reference image, or the same seed but a different prompt/parameters, or you can go wild using img2img/inpainting (btw, I struggle to find an inpainting tool for Stable Diffusion: they seem to be just img2img with a mask, without context).
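The "same seed" trick works because a diffusion model's starting noise is fully determined by the seed. The toy sketch below (a numpy stand-in, not a real diffusion model) shows that fixing the seed makes the initial latent identical across generations, so only the prompt/parameter changes drive differences between frames.

```python
import numpy as np

def initial_latent(seed, shape=(4, 8, 8)):
    """Toy stand-in for a diffusion model's starting noise: with the
    seed fixed, the initial latent is identical across generations."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

same_a = initial_latent(42)
same_b = initial_latent(42)
other = initial_latent(43)

print(np.array_equal(same_a, same_b))  # True: same seed, same noise
print(np.array_equal(same_a, other))   # False: new seed, new noise
```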
Then pass the generated images to one of the most recent interpolation algorithms, like this one https://github.com/megvii-research/ECCV2022-RIFE or the one used in the replicate we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation )
The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
In the end, I'd use some good app to stabilize the image if needed, to get a more "calm" look. I use Luma Fusion, but it's a paid app (cheap, one-time payment, for iOS). I'm sure there are a ton of open-source implementations.
It's an approach similar to the animation on replicate, but it allows a lot of fine-tuning and you can add new animation ideas/tools to the process.
Nothing revolutionary, but I hope it helps!
> You have generated some pretty cool designs.
Thanks! I put in a lot of work over the last few weeks. The project has a mission; I wrote something, but it's not ready yet. I believe it will be ready by the launch of Dall-E 8 :-/
-
Help with interpolating "missing" frames from source video
You'd probably get way better results by using something like RIFE to do the interpolation and recreate missing frames, instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
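The reason learned interpolators tend to beat blend-style fallbacks is ghosting: averaging two frames of a moving object produces two faint copies, while motion-compensated interpolation places one full-intensity copy halfway along the motion path. A minimal numpy illustration (an idealised toy, not RIFE's or minterpolate's actual algorithm):

```python
import numpy as np

def blend_midpoint(a, b):
    """Blend-style interpolation: average the two frames.
    Moving content shows up twice at half intensity (ghosting)."""
    return (a + b) / 2

def motion_midpoint(a, shift):
    """Idealised motion-compensated midpoint for a pure horizontal
    translation: place the content halfway along its motion path."""
    return np.roll(a, shift // 2, axis=1)

# A bright square moving 4 px to the right between two frames.
a = np.zeros((8, 8))
a[2:4, 0:2] = 1.0
b = np.roll(a, 4, axis=1)

ghosted = blend_midpoint(a, b)   # two half-intensity squares
clean = motion_midpoint(a, 4)    # one full-intensity square, shifted 2 px

print(ghosted.max())  # 0.5 -> ghosting
print(clean.max())    # 1.0 -> clean intermediate
```

Real interpolators must of course estimate the motion field rather than being told the shift, which is exactly what RIFE's flow estimation does.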
DeOldify
- Would someone be able to restore this image of my grandpa in the marines? Color and non if possible!
- Help achieving a look from B&W to color film
- Is there a way to colorize images like this using controlNet and without morphing his face? If yes, does anyone know how?
- ControlNet is great for bringing old pictures back to life. This was one of the most dangerous jobs in my country: moving logs on a river
- ControlNet for Automatic1111 is here!
-
I improved the colorization of the NYC 1911 film by combining multiple colorization neural networks. Here is a brief comparison to the previous method. I will leave a link to the full video in the comments for those who are interested.
The old method is simple. All you need to do is run DeOldify in a Colab notebook. Then you can pick any neural networks you want for upscaling and frame interpolation. A lot of people use dain-app, but I find RIFE faster and easier to use.
-
80 Blog Posts to Learn Computer Vision
Today, we’re talking to a very special “Software Guy, currently digging deep into GANs” — The author of DeOldify: Jason Antic.
- I recently found an AI for restoring and colorizing old photos, and this is how I managed to restore the only photo of my great-grandfather, about whom I know practically nothing (apart from the fact that he was executed during the war and was quite a ladies' man)! I'll attach links in a comment. :) The spotted tie rocks. :D
- Anyone know of a (preferably local) batch-image colourisation software?
-
Retro personal computer ads from the 1980s
I think the novel and interesting tech is still happening, it's just that without the colorful ads for it on TV, and without the software being packaged up and sold with pretty box art that you can physically hold, it doesn't feel as much like a capital-E Experience. It's probably the Internet's fault that we don't do things like that anymore, but the upside is that we now have access to so many ideas and applications from all over, even ones that aren't commercially viable.
Some that look exciting to me are: an AI that lets you animate still photos realistically [1], a simple website that guides you to discover new parks, eateries, and other places near you [2], an AI that colorizes old black-and-white photos/video [3], a Street View style map of the game world from "The Legend of Zelda: Breath of the Wild", with some 1st person 360 degree photos [4], and a tiny game engine that lets you distribute your whole game physically via printed QR codes [5].
If marketing and graphic design people ever felt like getting together to do some 'side projects', I vote that they should make print ads for apps/websites that they like :)
[1] https://github.com/AliaksandrSiarohin/first-order-model
[2] https://randomlocation.xyz (https://randomlocation.xyz/help.txt for customization)
[3] https://github.com/jantic/DeOldify
[4] https://nassimsoftware.github.io/zeldabotwstreetview/
[5] https://github.com/kesiev/rewtro
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
sd-webui-controlnet - WebUI extension for ControlNet
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
ArtLine - A Deep Learning based project for creating line art portraits.
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
cnn-colorize - CNN Model to Colorize Grayscale Images
VideoRenderer - RTX HDR modded into MPC-VideoRenderer.
stylegan - StyleGAN - Official TensorFlow Implementation
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.
colorize-photos - Colorize all the photos in a directory