stable-diffusion-webui-feature-showcase vs stable-diffusion

| | stable-diffusion-webui-feature-showcase | stable-diffusion |
|---|---|---|
| Mentions | 33 | 40 |
| Stars | 975 | 594 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 7 months ago | over 1 year ago |
| Language | Jupyter Notebook | - |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
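The page doesn't publish the exact weighting behind the activity number. As a purely illustrative sketch (the half-life decay and the `activity_score` helper below are assumptions, not the site's real formula), a recency-weighted commit score could look like:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so recent commits weigh more than
    older ones. The 30-day half-life is an illustrative assumption,
    not the site's actual formula."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A commit today counts fully; one from 30 days ago counts half.
print(activity_score([0, 30]))  # 1.5
```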
stable-diffusion-webui-feature-showcase
- How to turn an anime image into a realistic image in Stable Diffusion?
- [Stable Diffusion] Textual inversion with the AUTOMATIC1111 webui
- [Ainudes] How to create AI nudes?
- Is there any documentation for Automatic1111 WebUI?
- Is there a properly comprehensive guide on prompt syntax?
  A1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
- Which one is the "official" version?
  Here's a quick rundown on a few of the most popular ones, with links. I started out using CMDR2, which is very easy to get running as a newbie. Then I kind of graduated to NMKD because I wanted something a little more mainstream but still easy to use. Finally, I decided I was hungry for all the strange and exotic bells and whistles that SD had to offer, so I installed Automatic1111. I also wanted something that would work well with my 4GB GTX 1650 laptop card, since that's considered "low RAM" and kind of on the edge for running SD; Automatic1111 fit the bill there, too.
- At your service...
  All generations were on the "Berry's Mix" model, which is made by combining NAI-final, Zenith's F111, r34, and SD1.4 according to this recipe. I used around 30 steps when generating images and inpainting, but 70-80 steps when outpainting, because I read here that outpainting really benefits from extra steps. When outpainting I would generate 2-4 versions, pick the least broken one, then tidy up with inpainting.
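Mixes like "Berry's Mix" are built by weighted merging of checkpoints. As a rough illustration only — plain float dicts stand in for tensor state dicts, and `merge_checkpoints`, the weights, and the chaining order are assumptions, not the actual recipe:

```python
def merge_checkpoints(a, b, alpha=0.5):
    """Weighted-sum merge of two model state dicts, per parameter:
    result = (1 - alpha) * a + alpha * b.
    Plain float dicts stand in for real tensors here."""
    assert a.keys() == b.keys(), "checkpoints must share the same keys"
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

# Mix recipes typically chain merges: combine two models,
# then merge the result with a third at some other weight.
m1 = {"w": 1.0}
m2 = {"w": 3.0}
m3 = {"w": 5.0}
step1 = merge_checkpoints(m1, m2, alpha=0.5)      # {"w": 2.0}
final = merge_checkpoints(step1, m3, alpha=0.25)  # {"w": 2.75}
```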
- What's the name of this feature?
  Sounds like "outpainting", one of the very first features listed on the 1111 repo, with some instructions: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
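Mechanically, outpainting enlarges the canvas and lets the model fill the new region. A minimal sketch of just the canvas-extension step — `extend_canvas_right` is a hypothetical helper, and the noise filler merely stands in for the diffusion model's inpainting pass:

```python
import random

def extend_canvas_right(image, extra_cols, fill=None):
    """Pad each row of a 2D pixel grid with `extra_cols` new pixels
    on the right. In real outpainting the new region is masked and
    filled by the diffusion model; here we fill it with noise (or a
    constant) as a placeholder."""
    rng = random.Random(0)  # seeded so the "noise" is reproducible
    filler = (lambda: fill) if fill is not None else (lambda: rng.randint(0, 255))
    return [row + [filler() for _ in range(extra_cols)] for row in image]

img = [[10, 20], [30, 40]]
out = extend_canvas_right(img, 2, fill=0)
# Each row grows from 2 to 4 pixels; the original pixels are untouched.
```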
- How do you expand an image? (image to image)
- Running neural networks locally.
  I have no idea what you're talking about. Just get Automatic1111.
stable-diffusion
- Stable Diffusion links from around September 12, 2022 that I collected for further processing
- Stable Diffusion links from around September 16, 2022 that I collected for further processing
- Can't install neonsecret's fork
  1. `git clone https://github.com/neonsecret/stable-diffusion`
  2. `pip install --upgrade -r requirements.txt`
  3. `conda env create -f environment.yaml`
- AI Art: Dantooine Jedi Enclave. Unimaginably cool; I can make fanart for any game
- Please recommend a way to run SD on a 4GB Nvidia card on Ubuntu
  neonsecret's fork is the only one I can get to run on my 4GB GeForce GTX 1050 Ti. I also use OptimizedSD: just the optimizedSD scripts folder copied over into neonsecret's. I've never been able to get automatic1111's fork to work for me.
- Everything has worked flawlessly so far except this command. Any idea as to what the issue might be?
  You can also clone neonsecret's version of the optimized repository if you want a better GUI, or use Arki's guide for AUTOMATIC1111's repo, which also has an optimized mode and is pretty feature-packed.
- Why can't I use Stable Diffusion?
  sd gui
- The first 4K picture ever produced by neural networks
  Hey guys, today I produced the first ever 4K image using this: https://github.com/neonsecret/stable-diffusion/
- Best GUI overall?
  https://github.com/neonsecret/stable-diffusion/ and https://github.com/neonsecret/neonpeacasso. I have two of those, for both low-end and high-end GPUs.
- Literally 4K (3840x2176)
  Using https://github.com/neonsecret/stable-diffusion
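The odd height of 2176 (rather than 4K's usual 2160) is likely no accident: Stable Diffusion UIs commonly constrain width and height to multiples of 64 (matching the UNet's overall downsampling), so 2160 gets rounded up. A quick check with a hypothetical rounding helper:

```python
def round_up_to_multiple(n, m=64):
    """Round n up to the nearest multiple of m; many SD UIs
    constrain width/height this way. Uses ceiling division."""
    return -(-n // m) * m

print(round_up_to_multiple(2160))  # 2176
print(round_up_to_multiple(3840))  # 3840 (already a multiple of 64)
```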
What are some alternatives?
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
glid-3-xl-stable - stable diffusion training
stable-diffusion-rocm
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]