stable-diffusion-webui
| | stable-diffusion | stable-diffusion-webui |
|---|---|---|
| Mentions | 8 | 75 |
| Stars | 436 | 2,208 |
| Growth | - | - |
| Activity | 0.0 | 9.8 |
| Latest commit | 12 months ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- DALL·E Now Available Without Waitlist
No, sorry, but there's a whole bunch of one-click things now, I think?
I'm running it on Windows 10 using (a modified version of) https://github.com/bfirsh/stable-diffusion.git and Anaconda to create the environment from their `environment.yaml` (all of which was done using the normal `cmd` shell). Then to use it, I activate that env from `cmd` and switch into cygwin `bash` to run the `txt2img.py` script (because it's easier to script, etc.)
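The setup described above can be sketched as shell commands. This is a minimal sketch: the repository URL comes from the post, but the environment name `ldm` and the example prompt are assumptions based on the upstream CompVis layout.

```shell
# Clone the fork mentioned above and build the conda environment it ships.
git clone https://github.com/bfirsh/stable-diffusion.git
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm   # environment name assumed from the upstream environment.yaml

# Run the text-to-image script (prompt is only an example).
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse"
```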
- How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I am using the bfirsh version. And yes, I run `python scripts/txt2img.py` to generate an image.
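One way to keep the arguments for each terminal run, since the script itself doesn't store them, is to write them to a JSON sidecar file next to the outputs. A minimal sketch (the `save_run_args` helper and its argument names are hypothetical, not part of the repository):

```python
import json
import time
from pathlib import Path


def save_run_args(output_dir: str, args: dict) -> Path:
    """Write the generation arguments to a timestamped JSON file in output_dir."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"run_{int(time.time())}.json"
    path.write_text(json.dumps(args, indent=2))
    return path


if __name__ == "__main__":
    # Example: record the prompt and sampler settings used for a run
    # (values are illustrative only).
    args = {"prompt": "a red fox", "seed": 42, "steps": 50, "scale": 7.5}
    saved = save_run_args("outputs", args)
    print(f"Arguments saved to {saved}")
```

With the settings on disk, a run can later be reproduced by reading the JSON back and passing the same values to the script.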
- Current canonical way to install Stable Diffusion on Apple Silicon?
Specifically regarding the first option above, I see that the procedure clones the repository from: https://github.com/bfirsh/stable-diffusion.git
- One-Click Install Stable Diffusion GUI App for M1 Mac. No Dependencies Needed
Just done a run on my 3080 under Windows using https://github.com/bfirsh/stable-diffusion.git and it's about 8 iterations/sec when nothing else is using CPU or GPU.
- Using the same seed and same prompt is still resulting in two different images?
I've cloned this repository on my M1 Mac: https://github.com/bfirsh/stable-diffusion/tree/apple-silicon-mps-support
- Run Stable Diffusion on Your M1 Mac’s GPU
Boom - nice. Here's a fork with that: https://github.com/bfirsh/stable-diffusion/tree/lstein
Requirements are in "requirements-mac.txt", which will need substituting into the guide.
We're testing this out with a few people in Discord before shipping to the blog post.
stable-diffusion-webui
- [Stablediffusion] Stable Diffusion web user interface
- Generating game concept art
- ../../workspace/imgs/txt2img
I am using this one : https://github.com/hlky/stable-diffusion-webui
- How to generate similar images to an input image *without* a prompt?
Not sure about the script but you can try using this web-ui's img2img tab.
- Enhancing local detail and cohesion by mosaicing
https://github.com/hlky/stable-diffusion-webui now redirects to /sd-webui/stable-diffusion-webui, as though they're the "true" sd-webui.
- Reinstalled new hlky update & img2img returns errors (not just where you have to click on mask & back on crop)
As an update, in case anyone else has the issue: after getting some help (thanks u/vedroboev), I installed from here. Not sure what the difference is, but I got it working.
- Is anyone else unable to use the site?
- Fixing SD images with img2img, am I misunderstanding the concept?
I would pick a version from 8/31 on the GitHub page of the stable diffusion repo, and then follow step 2a in this guide https://rentry.org/GUItard to transfer the files from https://github.com/hlky/stable-diffusion-webui/tree/96aba4b36d59803f3817ee60e96a097f54962ae4
- Can't seem to get img2img up and running
This is a bug with the newest UI version. See this.
- Stable Diffusion Img2Img Help
What are some alternatives?
stable_diffusion.openvino
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
onnx - Open standard for machine learning interoperability
sd-webui-colab - A repo for the maintenance of the Colab version of stable-diffusion-webui repo
waifu-diffusion - stable diffusion finetuned on weeb stuff
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
diffusers-uncensored - Uncensored fork of diffusers
stable-diffusion - A latent text-to-image diffusion model
txt2imghd - A port of GOBIG for Stable Diffusion
invisible-watermark - python library for invisible image watermark (blind image watermark)
taming-transformers - Taming Transformers for High-Resolution Image Synthesis