stable-diffusion-webui vs stable-diffusion-webui-feature-showcase

| | stable-diffusion-webui | stable-diffusion-webui-feature-showcase |
|---|---|---|
| Mentions | 104 | 33 |
| Stars | 5,487 | 968 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | over 1 year ago | 7 months ago |
| Language | Python | - |
| License | GNU Affero General Public License v3.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui
-
[Stable Diffusion] I'm confused, help? - How do you use LDSR with SD-Webui?
[https://github.com/sd-webui/stable-diffusion-webui/wiki/installation](https://github.com/sd-webui/stable-diffusion-webui/wiki/installation)
-
[Stable Diffusion] What is the best GUI to install on Windows?
https://github.com/sd-webui/stable-diffusion-webui (takes a while to install)
- Daily General Discussion - October 21, 2022
-
Most popular IA to animate?
you can "animate" with stable diffusion using text to video https://github.com/nateraw/stable-diffusion-videos or https://github.com/sd-webui/stable-diffusion-webui
-
Automatic1111 removed from pinned guide.
I mentioned Automatic1111 on SD-WEBUI and they deleted the comment. I guess this is why. My installation failed on SD-WEBUI and there was no solution for me. I suspect that's why Automatic1111's fork is so popular. He went above and beyond to make sure people with 1660 Ti cards could run SD flawlessly with all the different tools available.
-
.pt to .ckpt
Is there any way to convert a .pt model to a .ckpt model? Stable-diffusion-webui only seems to support the second type of file, and just renaming them does not work.
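Renaming fails because the two formats differ in structure, not just extension: `.ckpt` loaders typically expect a dict with a `"state_dict"` key, while a bare `.pt` file may hold the tensors directly. A minimal sketch of re-wrapping one into the other (the function name and the `"state_dict"` key convention are assumptions; what a given `.pt` file actually contains depends on how it was exported):

```python
import torch

def pt_to_ckpt(pt_path: str, ckpt_path: str) -> None:
    # Load whatever the .pt file holds onto the CPU.
    data = torch.load(pt_path, map_location="cpu")
    # Stable Diffusion .ckpt loaders commonly expect {"state_dict": {...}};
    # wrap the payload under that key if it is not already there.
    if isinstance(data, dict) and "state_dict" in data:
        ckpt = data
    else:
        ckpt = {"state_dict": data}
    torch.save(ckpt, ckpt_path)
```

Note this only helps when the `.pt` file really is a full model checkpoint; a textual-inversion embedding saved as `.pt` is a different kind of object and cannot be turned into a model checkpoint this way.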
-
Flooded district by AI
This is Stable-Diffusion. Here is a UI version https://github.com/sd-webui/stable-diffusion-webui
-
AI image generated using the prompt "Streets of Dunwall"
I dunno about the app. I use this https://github.com/sd-webui/stable-diffusion-webui it's very resource hungry though.
-
NMKD Stable Diffusion GUI 1.5.0 is out! Now with exclusion words, CodeFormer face restoration, model merging and pruning tool, even lower VRAM requirements (4 GB), and a ton of quality-of-life improvements. Details in comments.
Haven't tried this GUI yet. Can anyone chime in about how it compares to Automatic1111's and sd-webui/HLKY's? There are so many good repos out there that it's getting hard to keep track of them all
-
Someone just joined 11 GPUs to the Stable Horde. I just tested: 20 gens @ 1024x1024x50 in 2 minutes! All for free!
Maybe those who joined were not aware that they joined the horde :-)
stable-diffusion-webui-feature-showcase
- How to turn anime image to realistic image in stable diffusion?
- [Stable Diffusion] textual inversion with the AUTOMATIC1111 webui
- [Ainudes] How do you create AI nudes?
- Is there any documentation for Automatic1111 WebUI?
-
Is there a properly comprehensive guide on prompt syntax?
A1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
-
Which one is the "official" version
Here's a quick rundown on a few of the most popular ones with links. I started out using CMDR2, which is very easy to get running as a newbie. Then I kind of graduated to NMKD because I wanted something a little more mainstream but still easy to use. Then, I finally decided I was hungry for all the strange and exotic bells and whistles that SD had to offer me, and so I installed Automatic1111. I also wanted something that would work well with my 4GB GTX 1650 laptop card, because that's considered "low VRAM" and kind of on the edge for running SD. Automatic1111 fit the bill there, too.
-
At your service...
All generations were on the "Berry's Mix" model, which is made by combining NAI-final, Zenith's F111, r34 and SD1.4 according to this recipe. I used 30ish steps when generating images and inpainting, but 70-80 steps when outpainting because I read here that outpainting really benefits from extra steps. When outpainting I would generate 2-4 versions and pick the least broken one, then tidy up with inpainting.
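Mixes like the one described above are usually produced with a weighted-sum checkpoint merge, the kind that GUIs such as Automatic1111's expose as a "checkpoint merger". A minimal sketch of the underlying operation (the function name and the blend ratio are illustrative, not the actual "Berry's Mix" recipe):

```python
import torch

def weighted_merge(sd_a: dict, sd_b: dict, alpha: float) -> dict:
    """Blend two model state dicts: result = (1 - alpha) * A + alpha * B."""
    merged = {}
    for key, tensor_a in sd_a.items():
        if key in sd_b and torch.is_tensor(tensor_a):
            # Linearly interpolate weights that exist in both models.
            merged[key] = (1 - alpha) * tensor_a + alpha * sd_b[key]
        else:
            # Keep A's value for keys B does not have (or non-tensor entries).
            merged[key] = tensor_a
    return merged
```

Multi-model recipes are just this applied repeatedly: merge two checkpoints, then merge the result with the next one at a new ratio.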
-
What's the name of this feature?
sounds like "outpainting", one of the very first features listed on 1111 repo with some instructions: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
- How do you expand an image? (image to image)
-
Running neural networks locally.
I have no idea what you're talking about. Just get Automatic1111
What are some alternatives?
diffusers-uncensored - Uncensored fork of diffusers
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
onnx - Open standard for machine learning interoperability
glid-3-xl-stable - stable diffusion training
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
rocm-build - build scripts for ROCm
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
waifu-diffusion - stable diffusion finetuned on weeb stuff
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.