dain-ncnn-vulkan vs frame-interpolation
| | dain-ncnn-vulkan | frame-interpolation |
|---|---|---|
| Mentions | 9 | 74 |
| Stars | 496 | 2,672 |
| Growth | - | 1.8% |
| Activity | 0.0 | 0.0 |
| Latest commit | 6 months ago | 9 months ago |
| Language | C | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dain-ncnn-vulkan
- Stable Diffusion on AMD RDNA™ 3 Architecture
- [Help] Looking for a command-line implementation of AI video upscaling, preferably FOSS
I also found this and this, which Flowframes seems to be based on. These would probably help with the frame interpolation and frame-rate increase, but won't help me with the upscaling.
- Short interpolation animation between several frames?
Not SD-based, but maybe this helps? https://github.com/nihui/dain-ncnn-vulkan https://github.com/nihui/rife-ncnn-vulkan
- Are you guys interested in a vid2vid?
- Film: Frame Interpolation for Large Motion
- [Question] How do I turn an MP4 video into 34,416 PNG frames / pictures?
I can recommend DAIN for the 60 fps: https://github.com/nihui/dain-ncnn-vulkan
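For the extraction step itself, ffmpeg alone is enough. A minimal sketch, assuming ffmpeg is on the PATH and using hypothetical file names (input.mp4, a frames/ output directory):

```python
import pathlib
import subprocess

# Hypothetical paths; adjust to your own video and output directory.
pathlib.Path("frames").mkdir(exist_ok=True)

# Dump every frame of the video as a zero-padded PNG sequence.
# %08d yields 8-digit names (00000001.png, ...), plenty for 34,416 frames.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%08d.png"], check=True)
```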
- How to run DAIN (interpolation animation software) on Mac
Link to the instructions: https://github.com/nihui/dain-ncnn-vulkan/blob/master/README.md
- Upscaled Anime
For AMD and NVIDIA. But it takes forever to do even a few seconds on AMD for me (5700 XT, and NVIDIA GPUs are impossible to get...). Generally you split the video into single frames, interpolate them with the DAIN AI models (I don't know which model is best for animation), and the frames the model creates are then re-rendered into a video with ffmpeg.
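That split/interpolate/re-render workflow can be scripted end to end. A rough sketch, assuming the dain-ncnn-vulkan binary is on the PATH and that its directory mode takes -i, -o, and -n (target frame count) as in its README; verify the flags with `dain-ncnn-vulkan -h` for your build. Note that audio is not carried over:

```python
import pathlib
import subprocess

OUT_FPS = 60  # target frame rate after doubling the frame count of a 30 fps clip

pathlib.Path("frames").mkdir(exist_ok=True)
pathlib.Path("interp").mkdir(exist_ok=True)

# 1) Split the video into single frames.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%08d.png"], check=True)

# 2) Interpolate with the DAIN models; -n asks for twice as many frames.
n_in = len(list(pathlib.Path("frames").glob("*.png")))
subprocess.run(
    ["dain-ncnn-vulkan", "-i", "frames", "-o", "interp", "-n", str(n_in * 2)],
    check=True,
)

# 3) Re-render the interpolated frames into a video with ffmpeg.
subprocess.run(
    ["ffmpeg", "-framerate", str(OUT_FPS), "-i", "interp/%08d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4"],
    check=True,
)
```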
frame-interpolation
- Aging with AI from age 9 to age 99.
Lastly, I used FILM, an image interpolation library, to interpolate between the images.
- AnimDiff
1) Generate video using https://github.com/camenduru/animatediff
2) Upscale using SD-CN https://github.com/volotat/SD-CN-Animation
3) Interpolate frames using https://github.com/google-research/frame-interpolation
4) Add audio using https://huggingface.co/spaces/suno/bark
- What is the current best way to make sequence images for animation that keep the art style consistent?
I am aware of interpolation as well (https://github.com/google-research/frame-interpolation), where you give it two images and it generates the images in between, but I'm not sure I have good enough images to attempt this yet.
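For that two-images-in, in-betweens-out use, FILM is also published as a TensorFlow Hub saved model, which avoids cloning the repo. A minimal sketch, assuming the tfhub.dev/google/film/1 handle still resolves and following the input/output dict convention from Google's FILM tutorial as I recall it (batched float RGB images in [0, 1], a time value of 0.5 for the midpoint); the file names are hypothetical:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# FILM saved model on TF Hub; verify the handle if it has moved.
model = hub.load("https://tfhub.dev/google/film/1")

def load_image(path):
    """Decode an image file into a float32 RGB array in [0, 1]."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3)
    return tf.cast(img, tf.float32).numpy() / 255.0

image1 = load_image("frame_a.png")  # hypothetical input frames
image2 = load_image("frame_b.png")

inputs = {
    "time": np.array([[0.5]], dtype=np.float32),  # t=0.5: midpoint frame
    "x0": np.expand_dims(image1, axis=0),         # add batch dimension
    "x1": np.expand_dims(image2, axis=0),
}
mid = model(inputs)["image"][0].numpy()  # the generated in-between frame
```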
- The AI will make You an Anime in Real Time
Super neat though. With some interpolation (possibly this Google Research one I just found via ChatGPT), it wouldn't be too bad to dump a video in and have it process in the background.
- My older video, without ControlNet or training
- The secret to REALLY easy videos in A1111 (easier than you think)
The FILM repo by Google Research: they made this very cool interpolation method, my favourite so far. It's a pain to set up; I didn't manage to run it on my local machine because I can't get "pip install tensorflow==2.6.2" to run on my Windows, so I can't install the requirements or run the script. But you can use the Colab, and once you hook it up to your Google Drive you can change the path to your folder of images, and it will process and spit out the interpolated video for you. I only have the free tier, and it took 16 minutes for the sample video.
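For anyone who does get the requirements installed, locally or in the Colab, the repo's documented batch entry point is eval.interpolator_cli. A sketch of invoking it from Python, assuming a checkout of google-research/frame-interpolation with the pretrained film_net Style checkpoint downloaded; the flag names follow the README, so verify them against your checkout:

```python
import subprocess

# Run FILM's batch interpolator over a directory of frames.
# --times_to_interpolate k recursively inserts midpoints, giving
# 2**k - 1 new frames between each input pair; --output_video also
# writes the result out as a video.
subprocess.run(
    [
        "python3", "-m", "eval.interpolator_cli",
        "--pattern", "photos",  # hypothetical folder of input frames
        "--model_path", "pretrained_models/film_net/Style/saved_model",
        "--times_to_interpolate", "3",
        "--output_video",
    ],
    check=True,
)
```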
- Loopback Wave Workflows (FILM, AE, Flowframes)
FILM (Frame Interpolation for Large Motion)
- More Loopback Wave + Flow, this time with realistic people
Edit: I used this for the interpolation ("flow" wasn't the correct word): https://github.com/google-research/frame-interpolation
- Large Motion Frame Interpolation – Google AI Blog
Also off-topic, but their github.io page has a BibTeX snippet for anyone wanting to cite their work in their papers. I'm not an academic, but I still strangely appreciate the gesture.
- AI Video to Fill Missing Frames/Smooth Animation?
FILM? https://film-net.github.io/
What are some alternatives?
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
ebsynth - Fast Example-based Image Synthesis and Style Transfer
Anime4K - A High-Quality Real Time Upscaler for Anime Video
AnimeInterp - The code for CVPR21 paper "Deep Animation Video Interpolation in the Wild"
waifu2x-ncnn-vulkan - waifu2x converter ncnn version, runs fast on intel / amd / nvidia / apple-silicon GPU with vulkan
sd-webui-mov2mov - This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.
Dain-App - Source code for Dain-App
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
realsr-ncnn-vulkan - RealSR super resolution implemented with ncnn library
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
srmd-ncnn-vulkan - SRMD super resolution implemented with ncnn library
optical.flow.demo - A project that uses optical flow and machine learning to detect aimhacking in video clips.