stable-diffusion-videos vs stable-diffusion-webui

| | stable-diffusion-videos | stable-diffusion-webui |
|---|---|---|
| Mentions | 17 | 104 |
| Stars | 4,234 | 5,487 |
| Growth | - | - |
| Activity | 2.0 | 10.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-videos
- How to create it?
-
Stable Diffusion Text-to-Video WebUI
Main Code: https://github.com/nateraw/stable-diffusion-videos/
-
Messing with the denoising loop can allow you to reach new places in latent space. Over 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
So I've continued to experiment with how many papers I can fit into a single pipe and have them play nicely together. The images below were created by combining the panorama code from omerbt/MultiDiffusion with the ideas from albarji/mixture-of-diffusers. Also turns out nateraw/stable-diffusion-videos can be seen as a special case of a panorama (in latent space rather than prompt space).
-
Comparison of new UniPC sampler method added to Automatic1111
https://huggingface.co/spaces/tomg-group-umd/pez-dispenser https://huggingface.co/spaces/AIML-TUDA/safe-stable-diffusion https://huggingface.co/spaces/AIML-TUDA/semantic-diffusion https://github.com/nateraw/stable-diffusion-videos
-
Start Frame -> Stable Diffusion + Linear Interpolation -> End Frame
The goal is to make a (short) video out of a given first and last frame. It is similar to what this guy does (https://github.com/nateraw/stable-diffusion-videos (7sec example video half way down page)). But instead of starting and ending with a prompt, I want to start and end with 2 different frames.
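The linked repo walks between points in latent space using spherical interpolation (slerp) rather than straight linear interpolation, which keeps intermediate latents at a plausible norm. To go frame-to-frame instead of prompt-to-prompt, one approach (an assumption about the pipeline, not something the post confirms) is to encode each frame to a latent with the model's VAE and then slerp between the two latents before decoding. A minimal NumPy sketch of the interpolation step itself:

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical interpolation between two flattened latent vectors.

    t is in [0, 1]; t=0 returns v0, t=1 returns v1. Falls back to
    linear interpolation when the vectors are nearly parallel, where
    the slerp formula becomes numerically unstable.
    """
    dot = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    if abs(dot) > 0.9995:
        return (1.0 - t) * v0 + t * v1
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Walking from one latent to another: decode each interpolated latent
# to get the in-between video frames.
start = np.array([1.0, 0.0])   # stand-ins for VAE-encoded frame latents
end = np.array([0.0, 1.0])
frames = [slerp(t, start, end) for t in np.linspace(0.0, 1.0, 8)]
```

In practice each interpolated latent would still be passed through a few denoising steps (img2img-style) so the decoder output stays on the image manifold; pure decoding of slerped latents tends to look blurry.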
-
Stable Diffusion Videos Easy-to-Use Playground & Competition This Week
Hey y'all! We've been working on a tool that extends Nate Raw's Stable Diffusion Videos repo and makes it as easy as possible for artists to use, and we're having a competition this week to stress-test the beta and see who can use it to make the most compelling short video (40 seconds max).
- Create videos with Stablediffusion. Saw this project and thought someone here might like it.
-
Tried to pull off an ultra smooth video where you don't realize the scenes are changing until after-the-fact so I could make an 8hr background video that won't give seizures
Of course! There might be a better process, but I mainly used: 1. Nate Raw's repo for morphing between prompts: https://github.com/nateraw/stable-diffusion-videos 2. Google's FILM interpolation to smooth out transitions: https://github.com/google-research/frame-interpolation
-
[video] Packed underground rave in North Korea with dj ill kim headlining
There are directions in the readme and an example script.
-
Short interpolation animation between several frames?
This does exactly that - https://github.com/nateraw/stable-diffusion-videos
stable-diffusion-webui
-
[Stable Diffusion] I'm confused, help? - How do you use LDSR with SD-Webui?
[https://github.com/sd-webui/stable-diffusion-webui/wiki/installation](https://github.com/sd-webui/stable-diffusion-webui/wiki/installation)
-
[Stable Diffusion] What is the best GUI to install on Windows?
https://github.com/sd-webui/stable-diffusion-webui (takes a while to install)
- Daily General Discussion - October 21, 2022
-
Most popular AI to animate?
you can "animate" with stable diffusion using text to video https://github.com/nateraw/stable-diffusion-videos or https://github.com/sd-webui/stable-diffusion-webui
-
Automatic1111 removed from pinned guide.
I mentioned Automatic1111 on SD-WEBUI and they deleted the comment. I guess this is why. My installation failed on SD-WEBUI and there was no solution for me. I suspect that's why Automatic1111's fork is so popular. He went above and beyond to make sure people with 1660ti's could run SD flawlessly with all the different tools available.
-
.pt to .ckpt
Any way to convert a .pt model to a .ckpt model? Stable-diffusion-webui only seems to support the second type of file but just renaming them does not work:
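Renaming fails because the extension isn't the issue: both files are torch-serialized pickles, but the web UI typically expects the weights nested under a `state_dict` key, while a bare `.pt` may hold the weight dict at the top level. A hedged sketch of the conversion; the `state_dict` layout is an assumption worth checking against your actual files, and `wrap_state_dict` is a hypothetical helper name:

```python
def wrap_state_dict(obj):
    """Wrap a bare weight dict under 'state_dict' if it isn't already.

    Assumption: checkpoint loaders that reject renamed .pt files are
    looking for this key; verify against the loader you are targeting.
    """
    if isinstance(obj, dict) and "state_dict" in obj:
        return obj  # already in .ckpt-style layout
    return {"state_dict": obj}

def convert_pt_to_ckpt(src_path, dst_path):
    """Load a .pt file and re-save it in .ckpt-style layout."""
    import torch  # torch handles serialization for both extensions
    obj = torch.load(src_path, map_location="cpu")
    torch.save(wrap_state_dict(obj), dst_path)

# Usage: convert_pt_to_ckpt("model.pt", "model.ckpt")
```

Note that `.pt` files from textual inversion (embeddings) or hypernetworks are not full model checkpoints at all, and no renaming or re-wrapping will turn them into one; those go in the UI's embeddings folder instead.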
-
Flooded district by AI
This is Stable-Diffusion. Here is a UI version https://github.com/sd-webui/stable-diffusion-webui
-
AI image generated using the prompt "Streets of Dunwall"
I dunno about the app. I use this: https://github.com/sd-webui/stable-diffusion-webui. It's very resource-hungry, though.
-
NMKD Stable Diffusion GUI 1.5.0 is out! Now with exclusion words, CodeFormer face restoration, model merging and pruning tool, even lower VRAM requirements (4 GB), and a ton of quality-of-life improvements. Details in comments.
Haven't tried this GUI yet. Can anyone chime in about how it compares to Automatic1111's and sd-webui/HLKY's? There are so many good repos out there that it's getting hard to keep track of them all
-
Someone just joined 11 GPUs to the Stable Horde. I just tested: 20 gens @ 1024x1024x50 in 2 minutes! All for free!
Maybe those who joined were not aware that they joined the horde :-)
What are some alternatives?
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
diffusers-uncensored - Uncensored fork of diffusers
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
onnx - Open standard for machine learning interoperability
dain-ncnn-vulkan - DAIN, Depth-Aware Video Frame Interpolation implemented with ncnn library
stable-diffusion-webui - Stable Diffusion web UI
stable-karlo - Upscaling Karlo text-to-image generation using Stable Diffusion v2.
rocm-build - build scripts for ROCm
stable-diffusion-tensorflow-IntelMetal - Stable Diffusion in TensorFlow / Keras, Designed for Apple Metal on Intel. Forked from @divamgupta's work [Moved to: https://github.com/soten355/MetalDiffusion]
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
Video-Diffusion-WebUI - Video Diffusion WebUI: Text2Video + Image2Video + Video2Video WebUI
waifu-diffusion - stable diffusion finetuned on weeb stuff