frame-interpolation
stable-diffusion-webui
| | frame-interpolation | stable-diffusion-webui |
|---|---|---|
| Mentions | 74 | 2,808 |
| Stars | 2,672 | 129,975 |
| Growth | 3.0% | - |
| Activity | 0.0 | 9.9 |
| Last commit | 8 months ago | 1 day ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frame-interpolation
- Aging with AI from age 9 to age 99.
- Lastly, I used FILM, an image interpolation library, to interpolate between the images.
- AnimDiff
1) generate video using https://github.com/camenduru/animatediff
2) upscale using SD-CN https://github.com/volotat/SD-CN-Animation
3) interpolate frames using https://github.com/google-research/frame-interpolation
4) add audio using https://huggingface.co/spaces/suno/bark
- What is the current best way to make sequence images for animation that keep the art style consistent?
I am aware of interpolation as well (https://github.com/google-research/frame-interpolation), where you give it two images and it generates the images in between, but I'm not sure I have good enough images to attempt this yet.
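For reference, that two-image use case maps onto the repo's bundled eval script. A minimal sketch of the invocation, assuming a cloned repo and a downloaded FILM model (the paths and filenames below are placeholders; the flags follow the repo's README):

```python
# Hedged sketch: run the frame-interpolation repo's two-frame eval script to
# synthesize the frame halfway between two keyframes. Paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "-m", "eval.interpolator_test",
        "--frame1", "frames/key_000.png",
        "--frame2", "frames/key_001.png",
        "--model_path", "pretrained_models/film_net/Style/saved_model",
        "--output_frame", "frames/key_000_mid.png",  # the generated in-between frame
    ],
    cwd="frame-interpolation",  # run from the cloned repo root
    check=True,
)
```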
- The AI will make You an Anime in Real Time
Super neat though. With some interpolation (possibly this Google Research one I just found via ChatGPT), it wouldn't be too bad to dump a video in and have it process in the background.
- my older video, without controlnet or training
- The secret to REALLY easy videos in A1111 (easier than you think)
FILM repo by Google Research: they made this very cool interpolation method, my favourite so far. It's a pain to set up; I didn't manage to run it on my local machine (I can't get "pip install tensorflow==2.6.2" to run on Windows, so I can't install the requirements or run the script). BUT you can use the Colab here, and once you hook it up to your Google Drive you can change the path to your folder of images and it will process them and spit out the interpolated video for you. I only have the free tier, and it took 16 minutes for the sample video.
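For anyone following that Colab route, the steps reduce to mounting Google Drive and pointing the repo's batch CLI at a folder of frames. A rough sketch, with the flags taken from the repo's eval CLI and the Drive paths as placeholder assumptions:

```python
# Rough sketch of the Colab/Drive workflow described above; run after cloning
# the repo and mounting Google Drive in the Colab runtime. Paths are placeholders.
import subprocess

frames_dir = "/content/drive/MyDrive/my_frames"                    # folder of input images
model_dir = "/content/drive/MyDrive/film_net/Style/saved_model"    # downloaded FILM model

subprocess.run(
    [
        "python", "-m", "eval.interpolator_cli",
        "--pattern", frames_dir,
        "--model_path", model_dir,
        "--times_to_interpolate", "3",  # recursion depth: higher values generate more in-between frames
        "--output_video",               # also stitch the interpolated frames into a video
    ],
    cwd="/content/frame-interpolation",  # cloned repo root in the Colab runtime
    check=True,
)
```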
- Loopback Wave Workflows (FILM, AE, Flowframes)
FILM (Frame Interpolation for Large Motion)
- More Loopback Wave + Flow, this time with realistic people
Edit: used this for the interpolation. Flow wasn't the correct word. https://github.com/google-research/frame-interpolation
- Large Motion Frame Interpolation – Google AI Blog
Also off-topic, but their github.io page has a BibTeX snippet for anyone wanting to cite their work in their papers. I'm not an academic, but I still strangely appreciate the gesture.
- AI Video to Fill Missing Frames/Smooth Animation?
FILM? https://film-net.github.io/
stable-diffusion-webui
- Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You are right, LLaVA cannot generate images. I don't have immediate plans for image generation, but check out these projects for local image generation:
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to be able to have a native stable diffusion experience, my rx 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my Windows machine in short order, and I don't even know what Stable Diffusion is.
But again, it would be nice to have first class support to locally participate in the fun.
- Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
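For context, `Image.blend` is Pillow's plain alpha blend of two same-size images, which is all that blending step amounts to. A minimal sketch (filenames are placeholders):

```python
# Minimal sketch of what PIL's Image.blend does: a pixel-wise alpha blend of
# two images of the same size and mode. Filenames are placeholders.
from PIL import Image

a = Image.open("original.png").convert("RGB")
b = Image.open("enhanced.png").convert("RGB")

# blend(im1, im2, alpha) returns im1 * (1 - alpha) + im2 * alpha
mixed = Image.blend(a, b, alpha=0.5)
mixed.save("blended.png")
```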
- How To Increase Performance Time on MacOS
- Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in automatic1111 webui for both images. Now that I think about it, the procedure I described was how I made this one, but the one that looks like an illustration was done in two steps. I used the canny ControlNet as I said for the outer part of the logo to preserve the shape of the fonts, but I had to turn it off for the boot to give SDXL leeway to add detail and make it look more like a boot.
- Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding and don't get discouraged when somebody says to learn PyTorch or recommends a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The ComfyUI API documentation is here and the A1111 API documentation is here. There is a difference in completeness; welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type who wants to deal with a nested file structure to simulate logic.
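As a starting point for that kind of tinkering, the A1111 web UI exposes an HTTP API when launched with the `--api` flag. A hedged sketch of a minimal txt2img call (the prompt, settings, and output filenames are placeholders):

```python
# Hedged sketch: call the A1111 web UI's txt2img endpoint. Assumes the UI is
# running locally with --api on the default port; payload values are placeholders.
import base64
import requests

payload = {
    "prompt": "a watercolor landscape, soft light",
    "negative_prompt": "blurry, low quality",
    "steps": 20,
    "width": 512,
    "height": 512,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# The response carries the generated images as base64-encoded PNG strings.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"out_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```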
- can't get it working with an AMD gpu
- SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
- 4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8
What are some alternatives?
ebsynth - Fast Example-based Image Synthesis and Style Transfer
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
AnimeInterp - The code for CVPR21 paper "Deep Animation Video Interpolation in the Wild"
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-webui-mov2mov - This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.
SHARK - SHARK - High Performance Machine Learning Distribution
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
optical.flow.demo - A project that uses optical flow and machine learning to detect aimhacking in video clips.
safetensors - Simple, safe way to store and distribute tensors