dain-ncnn-vulkan vs stable-diffusion-videos

| | dain-ncnn-vulkan | stable-diffusion-videos |
|---|---|---|
| Mentions | 9 | 17 |
| Stars | 496 | 4,234 |
| Growth | - | - |
| Activity | 0.0 | 2.0 |
| Last Commit | 6 months ago | 12 months ago |
| Language | C | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dain-ncnn-vulkan
- [Tech Support] How to run Dain (interpolation animation software) on Mac
Link to the instructions: https://github.com/nihui/dain-ncnn-vulkan/blob/master/readme.md
- Stable Diffusion on AMD RDNA™ 3 Architecture
- [Help] Looking for a command-line implementation of AI video upscaling, preferably FOSS
I also found this and this, which FlowFrames seems to be based on. These would probably help with frame extrapolation and raising the frame rate, but won't help me with the upscaling.
- Short interpolation animation between several frames?
Not SD based, but maybe this helps? https://github.com/nihui/dain-ncnn-vulkan https://github.com/nihui/rife-ncnn-vulkan
- Are you guys interested in a vid2vid?
- FILM: Frame Interpolation for Large Motion
- [Question] How do I turn an MP4 video into 34,416 PNG frames / pictures?
I can recommend DAIN for the 60 fps: https://github.com/nihui/dain-ncnn-vulkan
- How to run DAIN (interpolation animation software) on Mac
Link to the instructions: https://github.com/nihui/dain-ncnn-vulkan/blob/master/README.md
- Upscaled Anime
For AMD. But it takes forever to do even a few seconds on AMD for me (5700 XT; and Nvidia GPUs are impossible to get..). And NVIDIA. Generally you split the video into single frames, interpolate them with the DAIN AI models (I don't know which model is best for animation), and the frames the model creates are then re-rendered into a video with ffmpeg.
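The split/interpolate/re-encode workflow described in that comment can be sketched as a small helper that builds the three command lines. This is a sketch, not a definitive recipe: the ffmpeg arguments are standard, but the dain-ncnn-vulkan flags (`-i` input dir, `-o` output dir, `-n` target frame count) are assumptions based on its README and should be checked against your installed version.

```python
from pathlib import Path

def build_pipeline(video: str, n_frames: int, in_fps: int, out_fps: int,
                   workdir: str = "work"):
    """Return the three command lines: extract frames, interpolate, re-encode.

    n_frames is the number of frames in the source video; the dain-ncnn-vulkan
    flag names here are assumed from its README, not verified.
    """
    frames_in = Path(workdir, "frames_in")
    frames_out = Path(workdir, "frames_out")

    # 1) Split the source video into numbered PNG frames with ffmpeg.
    extract = ["ffmpeg", "-i", video, str(frames_in / "%06d.png")]

    # 2) Ask DAIN for proportionally more frames (e.g. 30 -> 60 fps doubles).
    target = n_frames * out_fps // in_fps
    interpolate = ["dain-ncnn-vulkan", "-i", str(frames_in),
                   "-o", str(frames_out), "-n", str(target)]

    # 3) Re-render the interpolated frames into a video at the new frame rate.
    encode = ["ffmpeg", "-framerate", str(out_fps),
              "-i", str(frames_out / "%06d.png"),
              "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"]
    return extract, interpolate, encode
```

The three commands would then be run in order, e.g. with `subprocess.run(cmd, check=True)`, after creating the two frame directories.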
stable-diffusion-videos
- How to create it?
- Stable Diffusion Text-to-Video WebUI
Main Code: https://github.com/nateraw/stable-diffusion-videos/
- Messing with the denoising loop can allow you to reach new places in latent space. Over 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
So I've continued to experiment with how many papers I can fit into a single pipe and have them play nicely together. The images below were created by combining the panorama code from omerbt/MultiDiffusion with the ideas from albarji/mixture-of-diffusers. It also turns out that nateraw/stable-diffusion-videos can be seen as a special case of a panorama (in latent space rather than prompt space).
- Comparison of new UniPC sampler method added to Automatic1111
https://huggingface.co/spaces/tomg-group-umd/pez-dispenser https://huggingface.co/spaces/AIML-TUDA/safe-stable-diffusion https://huggingface.co/spaces/AIML-TUDA/semantic-diffusion https://github.com/nateraw/stable-diffusion-videos
- Start Frame -> Stable Diffusion + Linear Interpolation -> End Frame
The goal is to make a (short) video out of a given first and last frame. It is similar to what this project does (https://github.com/nateraw/stable-diffusion-videos; see the 7-second example video halfway down the page), but instead of starting and ending with a prompt, I want to start and end with two different frames.
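For background, stable-diffusion-videos produces these walks by spherically interpolating (slerp) between points in the model's latent noise space; the same math would apply if the two endpoints were latents derived from two given frames instead of two prompts. A minimal NumPy sketch of slerp, where the function name and usage are illustrative rather than the repo's actual API:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray,
          eps: float = 1e-7) -> np.ndarray:
    """Spherical linear interpolation between two latent vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow the arc
    between them, which keeps the interpolant at a sensible norm for
    Gaussian latents (plain lerp would shrink it toward the origin).
    """
    v0f, v1f = v0.ravel(), v1.ravel()
    cos_theta = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # nearly parallel endpoints: fall back to linear lerp
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Walking from a start latent to an end latent over N frames:
# latents = [slerp(i / (N - 1), z_start, z_end) for i in range(N)]
```

Each interpolated latent is then decoded (or denoised) to a frame; stringing the decoded frames together gives the video.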
- Stable Diffusion Videos Easy-to-Use Playground & Competition This Week
Hey y'all! We've been working on a tool that extends Nate Raw's Stable Diffusion Videos repo and makes it as easy as possible for artists to use, and we're having a competition this week to stress-test the beta and see who can use it to make the most compelling short video (40 seconds max).
- Create videos with Stable Diffusion. Saw this project and thought someone here might like it.
- Tried to pull off an ultra-smooth video where you don't realize the scenes are changing until after the fact, so I could make an 8-hour background video that won't give seizures
Of course! There might be a better process, but I mainly used: 1) Nate Raw's repo for morphing between prompts: https://github.com/nateraw/stable-diffusion-videos 2) Google's FILM interpolation to smooth out transitions: https://github.com/google-research/frame-interpolation
- [video] Packed underground rave in North Korea with dj ill kim headlining
There are directions in the readme and an example script.
- Short interpolation animation between several frames?
This does exactly that - https://github.com/nateraw/stable-diffusion-videos
What are some alternatives?
- rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation, implemented with the ncnn library
- sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
- Anime4K - A high-quality real-time upscaler for anime video
- frame-interpolation - FILM: Frame Interpolation for Large Motion, in ECCV 2022
- waifu2x-ncnn-vulkan - waifu2x converter, ncnn version; runs fast on Intel / AMD / Nvidia / Apple Silicon GPUs with Vulkan
- stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
- Dain-App - Source code for Dain-App
- stable-karlo - Upscaling Karlo text-to-image generation using Stable Diffusion v2
- stable-diffusion-tensorflow-IntelMetal - Stable Diffusion in TensorFlow / Keras, designed for Apple Metal on Intel; forked from @divamgupta's work [Moved to: https://github.com/soten355/MetalDiffusion]
- srmd-ncnn-vulkan - SRMD super resolution implemented with the ncnn library
- Video-Diffusion-WebUI - Video Diffusion WebUI: Text2Video + Image2Video + Video2Video WebUI