deforum-for-automatic1111-webui vs sd-multi
| | deforum-for-automatic1111-webui | sd-multi |
|---|---|---|
| Mentions | 55 | 7 |
| Stars | 1,214 | 24 |
| Growth | - | - |
| Activity | 9.7 | 2.4 |
| Latest commit | 12 months ago | 9 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | The Unlicense |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deforum-for-automatic1111-webui
- AI-xited: my latest addition to the AI scene. Enjoy!
Deforum: https://github.com/deforum-art/deforum-for-automatic1111-webui
- How would someone create this AI art video?
It might be Deforum (project) (WebUI extension) or img2img loopback wave (WebUI extension). It's definitely something with img2img, though; that's clear from how it's warping.
- Amsterdam trip: smoking Stable Diffusion and drinking Deforum
- AI Video using SD and Deforum (Watch in 4k)
- Tutorial for animating images in Automatic1111 without (directly) using Deforum. Step-by-step instructions in comments.
Thanks for the workflow. To simplify the process, I created a PR that will hopefully make it a bit faster, especially if you're working on rented cloud GPUs.
- I make music videos and I'm pretty proud of this one
The sequence after the train with the flashing background is the only exception (1:03 to 1:24); that part was made using the Deforum notebook: https://deforum.github.io/
- Experimenting with my temporal-coherence script for a1111
I've gotten better results by applying some color correction to my loops, the same way Deforum does it: https://github.com/deforum-art/deforum-for-automatic1111-webui/blob/automatic1111-webui/scripts/deforum_helpers/colors.py
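The trick the commenter is borrowing is histogram matching: every loopback frame's color distribution is remapped to a fixed reference frame so colors don't drift over the animation (Deforum's colors.py does this with scikit-image/OpenCV, as I understand it). A minimal NumPy-only sketch of the same idea; the helper name is mine, not Deforum's:

```python
import numpy as np

def match_channel_histograms(image, reference):
    """Remap each channel of `image` so its value distribution follows the
    matching channel of `reference`. Hypothetical NumPy-only helper that
    approximates the per-frame color matching Deforum applies to loops."""
    matched = np.empty_like(image)
    for c in range(image.shape[-1]):
        src = image[..., c].ravel()
        ref_sorted = np.sort(reference[..., c].ravel())
        # Rank every source pixel, then look up the reference value
        # that sits at the same relative rank.
        src_order = np.argsort(src)
        ranks = np.linspace(0, len(ref_sorted) - 1, num=len(src)).astype(int)
        out = np.empty_like(src)
        out[src_order] = ref_sorted[ranks]
        matched[..., c] = out.reshape(image.shape[:-1])
    return matched
```

In a loopback workflow you would keep the first frame as `reference` and run each newly generated frame through this before feeding it back into img2img, which is what keeps long loops from slowly shifting hue.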
- "The Path" | Deforum Animation (Stable Diffusion v2.1) [8K] - Workflow Included
🔸 Deforum extension for Automatic1111 (local install): https://github.com/deforum-art/deforum-for-automatic1111-webui
- Did you know that Windows has free Paint 3D built into it? You can use it with the ControlNet depth model
- Bad Apple, but it's rendered and colorized with ControlNet
Deforum plugin settings files: render, coloration. The black-and-white canny edges + scribble-mode mid-stage rendered video. You can check the prompt-keyframing file to see whether I got the characters' names right. Despite my best efforts, some of the characters' appearances were off; I think either the character names were incorrectly spelled (I was referring to the Touhou wiki), the characters were underrepresented in the dataset, or their names overlapped with those of other franchises. There is also the issue of random, unrelated characters appearing when the prompt contains no characters and is purely abstract; I guess that relates to this specific model's retraining.
sd-multi
- sd-multi update: better downloading, ability to switch between SD 1.4 and 1.5, lama-cleaner added
The sd-multi "meta-fork", which allows running multiple Stable Diffusion systems easily from one place via Docker, has been updated with a few cool additions and changes!
- First full music video with Deforum 0.5 (single render)
It runs on Python anyway, so it's somewhat platform-agnostic. You can run it on a server (https://www.ni-sp.com/how-to-run-deforum-stable-diffusion-on-your-own-cloud-gpu-server/), on AWS (https://aws.amazon.com/marketplace/pp/prodview-j557wovfkxxbk), or dockerize it (https://github.com/arktronic/sd-multi).
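The "meta-fork" pattern mentioned here, one Docker Compose service per Stable Diffusion fork with shared model storage so switching forks doesn't re-download checkpoints, can be sketched roughly as below. All service names, ports, and paths are illustrative guesses, not sd-multi's actual layout:

```yaml
# Hypothetical compose sketch of a Stable Diffusion "meta-fork".
# Start one fork with: docker compose up automatic1111
# Switch forks by bringing up a different service name.
services:
  automatic1111:
    build: ./automatic1111
    ports: ["7860:7860"]
    volumes:
      - ./models:/models        # shared checkpoint storage across forks
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia    # pass the GPU through to the container
              count: all
              capabilities: [gpu]
  invokeai:
    build: ./invokeai
    ports: ["9090:9090"]
    volumes:
      - ./models:/models
```

The shared volume is the design choice that makes switching cheap: each fork sees the same multi-gigabyte weights, so only the comparatively small application images differ per service.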
- The official Deforum script for 2D/3D Stable Diffusion animations is now also an *extension* for AUTOMATIC1111's WebUI, with its own tab and better UX! (but still in beta)
Awesome, that's much more discoverable and convenient than the script! Just updated my meta-fork to include this update 😀
- sd-multi updates: more forks, included AUTOMATIC1111 scripts, screenshots with cats
A little over a week ago I created sd-multi, a "meta-fork" to make it easier to try out different Stable Diffusion forks/frontends/etc. using either WSL2 or native Linux, with Docker.
- Stable Diffusion links from around October 9, 2022 that I collected for further processing
- InvokeAI 2.0.0 - A Stable Diffusion Toolkit is released
Whoa, that web UI looks like a huge improvement over the previous one! Nicely done! I'll update my meta-fork to this version soon.
- I made a "meta-fork" for easily trying out different SD forks and switching between them
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
Txt2Vectorgraphics - Custom script for Automatic1111's StableDiffusion-WebUI.
sd-civitai-browser - An extension to help download models from CivitAi without leaving WebUI
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stable-diffusion-webui-colab - stable diffusion webui colab
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
sd-webui-additional-networks
stable-diffusion-webui-images-browser - an image browser for stable-diffusion-webui
IOPaint - Image inpainting tool powered by SOTA AI models. Remove any unwanted objects, defects, or people from your pictures, or erase and replace anything in them (powered by Stable Diffusion).
wslg - Enabling the Windows Subsystem for Linux to include support for Wayland and X server related scenarios