AnimeGANv2 vs ECCV2022-RIFE
| | AnimeGANv2 | ECCV2022-RIFE |
|---|---|---|
| Mentions | 4 | 12 |
| Stars | 5,003 | 4,072 |
| Growth | - | 1.5% |
| Activity | 0.0 | 5.8 |
| Last commit | 9 months ago | 2 months ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AnimeGANv2
-
Python Mini Projects
To turn your pictures into animations, you can use an algorithm called AnimeGANv2, a deep learning model that transforms real-world scenes into anime-style images. The technical nitty-gritty is beyond the scope of this blog, but if you'd like to learn more, feel free to check out the GitHub repository.
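As a concrete illustration, here is a minimal sketch of stylizing one photo with the community PyTorch port of AnimeGANv2 (bryandlee/animegan2-pytorch, linked later on this page). The torch.hub entry point name `generator` and the weight name `face_paint_512_v2` come from that port's README and may change; the `to_32s` helper mirrors how the original repo sizes arbitrary images, since the network expects dimensions that are multiples of 32.

```python
def to_32s(x: int) -> int:
    # AnimeGANv2 expects width/height to be multiples of 32.
    return max(32, x - x % 32)

def stylize(in_path: str, out_path: str) -> None:
    # Heavy imports are deferred so the pure helper above can be used alone.
    # Requires torch, Pillow, numpy, and network access for the hub weights.
    import numpy as np
    import torch
    from PIL import Image

    model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator",
                           pretrained="face_paint_512_v2").eval()
    img = Image.open(in_path).convert("RGB")
    img = img.resize((to_32s(img.width), to_32s(img.height)))
    # The generator maps inputs in [-1, 1] to outputs in [-1, 1].
    x = torch.from_numpy(np.asarray(img)).float().permute(2, 0, 1) / 127.5 - 1.0
    with torch.no_grad():
        y = model(x.unsqueeze(0))[0]
    y = ((y.clamp(-1, 1) + 1.0) * 127.5).permute(1, 2, 0).byte().numpy()
    Image.fromarray(y).save(out_path)

# Example usage: stylize("photo.jpg", "anime.png")
```

This is a sketch under the assumptions above, not the canonical usage; the port also ships a `face2paint` hub helper that handles resizing and normalization for you.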
-
Kennedy VS Nixon, Part One (1960) | 4k | HFR | COLORIZED | ANIMATED.
Thank you. Photo restore + DeOldify + AnimeGAN + RIFE + 4K upscale
-
An AI generated anime-style portrait for miyoung~
this one~ https://github.com/TachibanaYoshino/AnimeGANv2
-
Photo to Painting applications/notebooks?
https://github.com/TachibanaYoshino/AnimeGANv2 https://github.com/bryandlee/animegan2-pytorch
ECCV2022-RIFE
-
AI Frame interpolation Question
Check out RIFE.
-
Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
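For planning the frame counts involved, note that one 2x interpolation pass inserts a synthesized frame between each consecutive pair, so k frames become 2k - 1, and the playback rate doubles per pass. The helper names below are hypothetical; they just capture that arithmetic.

```python
def frames_after_2x(frames: int, passes: int = 1) -> int:
    # One 2x pass inserts a frame between each consecutive pair:
    # k frames -> 2k - 1. Multiple passes apply the doubling recursively
    # (roughly 4x output for two passes, and so on).
    for _ in range(passes):
        frames = 2 * frames - 1
    return frames

def target_fps(src_fps: float, passes: int = 1) -> float:
    # Playback rate needed to keep the clip's duration unchanged.
    return src_fps * (2 ** passes)
```

For example, a 30-frame folder run through one 2x pass yields 59 frames, which plays back at double the source frame rate for the same duration.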
-
New NVIDIA Driver with RTX Video Super Resolution is Now Available!
Personally, I have mine set to use RIFE AI via TensorRT for frame interpolation (2x) if the FPS is 30 or less.
-
I just added ControlNet BATCH support in automatic1111 webui and ControlNet extension, and here's the result. Read comment to support the Pull Requests so you can use this technique as soon as possible.
Oh now that I saw this comment, I started to investigate frame interpolation techniques using AI and found this: https://github.com/megvii-research/ECCV2022-RIFE
-
How can indie devs make 2d animations quickly, or streamline the process?
Yes, but you need to use a different AI first. There are multiple AIs, like RIFE (there are apps for it if you don't like code), that will smooth out your animation. Then you can use those frames with NovelAI to get a more organic look in the end.
-
ECCV2022-RIFE VS FluidFrames.RIFE - a user-suggested alternative
2 projects | 4 Feb 2023
-
Inpainting every frame using AE + SD
To get a smoother effect, you can reduce the frames per second and add FILM or RIFE frames in between.
-
I inserted myself into stable diffusion, not perfect but it kinda looks my face
Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
-
Stable Diffusion Animation
Sure! This would be my approach (and tools) if I was smarter:
If you make the generations with some similarities and use the right interpolation, you don't need 1,000 images like my video did, and you can obtain smooth movement.
First, generate images with some kind of visual anchor (a background, an object). You can use frames generated with the previous frame as the reference image, or the same seed but a different prompt/parameters, or you can go wild using img2img/inpainting (by the way, I struggle to find a true inpainting tool for Stable Diffusion: they all seem to be just img2img with a mask, without context).
Then pass the generated images to one of the most recent interpolation algorithms, like this one https://github.com/megvii-research/ECCV2022-RIFE or the one used in the Replicate demo we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation ).
The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
In the end, I'd use some good app to stabilize the image if needed, to get a more "calm" look. I use Luma Fusion, but it's a paid app (cheap, one-time payment, for iOS). I'm sure there are a ton of open-source implementations.
It's an approach similar to the animation on replicate, but it allows a lot of fine-tuning and you can add new animation ideas/tools to the process.
Nothing revolutionary, but I hope it helps!
> You have generated some pretty cool designs.
Thanks! I've put in a lot of work over the last few weeks. The project has a mission; I wrote something, but it's not ready yet. I believe it will be by the launch of Dall-E 8 :-/
-
Help with interpolating "missing" frames from source video
You'd probably get way better results by using something like RIFE to do the interpolation and recreate the missing frames, instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
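If the extra setup is acceptable, a minimal sketch of driving RIFE's bundled inference script from Python might look like the following. It assumes you have cloned megvii-research/ECCV2022-RIFE, installed its requirements, and placed the pretrained weights under `./train_log`; the `--exp` and `--video` flag names follow that repo's README and should be checked against the current version.

```python
import subprocess

def build_rife_cmd(video: str, exp: int = 1) -> list:
    # --exp=1 doubles the frame rate, --exp=2 quadruples it, and so on.
    return ["python3", "inference_video.py", f"--exp={exp}", f"--video={video}"]

def interpolate_video(video: str, exp: int = 1) -> None:
    # Must be run from the root of a clone of megvii-research/ECCV2022-RIFE,
    # with requirements installed and pretrained weights under ./train_log.
    subprocess.run(build_rife_cmd(video, exp), check=True)

# Example usage: interpolate_video("clip.mp4", exp=1)
```

Compared with ffmpeg's minterpolate filter, this produces learned intermediate frames rather than motion-compensated blends, which is why the post above expects better results from it.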
What are some alternatives?
AnimeGANv3 - Use AnimeGANv3 to make your own animation works, including turning photos or videos into anime.
stable-diffusion-webui - Stable Diffusion web UI
PaddleGAN - PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
Ubuntu-Deep-Learning-Environment-Setup - Guide to installing TensorFlow with an NVIDIA GPU and a deep learning environment - NVIDIA drivers/CUDA/cuDNN/tensorflow-gpu/Chinese documentation
sd-webui-controlnet - WebUI extension for ControlNet
TensorFlow-object-detection-tutorial - The purpose of this tutorial is to learn how to install and prepare TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
Cartoon-StyleGAN - Fine-tuning StyleGAN2 for Cartoon Face Generation
VideoRenderer - RTX HDR modded into MPC-VideoRenderer.
JoJoGAN - Official PyTorch repo for JoJoGAN: One Shot Face Stylization
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.