stable-diffusion-webui-feature-showcase vs diffusion-ui

| | stable-diffusion-webui-feature-showcase | diffusion-ui |
|---|---|---|
| Mentions | 33 | 11 |
| Stars | 975 | 139 |
| Growth | - | - |
| Activity | 0.0 | 5.1 |
| Latest commit | 7 months ago | about 2 months ago |
| Language | Jupyter Notebook | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-feature-showcase
- How to turn an anime image into a realistic image in stable diffusion?
- [Stable Diffusion] textual inversion with AUTOMATIC1111 webui
- [Ainudes] How to create AI nudes?
- Is there any documentation for Automatic1111 WebUI?
- Is there a properly comprehensive guide on prompt syntax?
  A1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
- Which one is the "official" version?
  Here's a quick rundown of a few of the most popular ones, with links. I started out using CMDR2, which is very easy to get running as a newbie. Then I graduated to NMKD because I wanted something a little more mainstream but still easy to use. Finally, I decided I was hungry for all the strange and exotic bells and whistles SD had to offer, so I installed Automatic1111. I also wanted something that would work well with my 4GB GTX 1650 laptop card, which is considered "low ram" and on the edge for running SD. Automatic1111 fit the bill there, too.
- At your service...
  All generations were on the "Berry's Mix" model, which is made by combining NAI-final, Zenith's F111, r34 and SD1.4 according to this recipe. I used 30ish steps when generating images and inpainting, but 70-80 steps when outpainting, because I read here that outpainting really benefits from extra steps. When outpainting I would generate 2-4 versions, pick the least broken one, then tidy up with inpainting.
- What's the name of this feature?
  Sounds like "outpainting", one of the very first features listed on the 1111 repo, with some instructions: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
- How do you expand an image? (image to image)
- Running neural networks locally.
  I have no idea what you're talking about. Just get Automatic1111.
diffusion-ui
- Who needs Photoshop generative AI when we have AUTO1111?
  You can use diffusion-ui on top of automatic1111 for easy outpainting. Run automatic1111 with --cors-allow-origins=http://127.0.0.1:5173,https://diffusionui.com, then select the automatic1111 backend, upload an image, and use the mouse scroll wheel to zoom out.
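The CORS setup in the comment above is just a launch flag; as a minimal sketch, assuming the stock webui.sh launcher on Linux/macOS (webui-user.bat would be the Windows equivalent), the invocation looks like:

```shell
# Whitelist the diffusion-ui origins (local dev server and hosted site) so the
# browser frontend is allowed to call the local AUTOMATIC1111 API.
./webui.sh --cors-allow-origins=http://127.0.0.1:5173,https://diffusionui.com
```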
- Show HN: InvokeAI, an open source Stable Diffusion toolkit and WebUI
  I see you've got my https://github.com/leszekhanusz/diffusion-ui GUI, but it seems to be linked to a completely unrelated face-swapping interface?
- Anyone using the stable-diffusion-webui repo by automatic1111 as an API?
  Yes, check out diffusion-ui.
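As background for the API question above: the webui also exposes a REST API of its own when launched with the --api flag. A minimal sketch of calling it directly, assuming a local instance on the default port 7860 and the /sdapi/v1/txt2img endpoint (the helper function names are my own, not part of the webui):

```python
import base64
import json
import urllib.request

def build_payload(prompt, steps=30):
    """Assemble the minimal JSON body for a txt2img request."""
    return {"prompt": prompt, "steps": steps}

def txt2img(prompt, steps=30, base_url="http://127.0.0.1:7860"):
    """POST a prompt to a local AUTOMATIC1111 instance (started with --api)
    and return the first generated image as raw PNG bytes."""
    data = json.dumps(build_payload(prompt, steps)).encode()
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Generated images come back base64-encoded in the "images" list.
    return base64.b64decode(result["images"][0])
```

With the server running, `txt2img("a painting of a lighthouse")` returns PNG bytes you can write straight to a file.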
- DiffusionUI responsive frontend working with the automatic1111 fork
  I modified my GUI Stable Diffusion frontend to be able to use the automatic1111 fork as a backend.
- Can we please make a general update on all the "most important" news/repos available?
  It's not really well known at the moment, but you also have my DiffusionUI interface, which is based on diffusers, with a backend where you can disable the NSFW filter.
- Show HN: Guided inpainting with Stable Diffusion using DiffusionUI
- Presenting DiffusionUI, a web GUI for Stable Diffusion backends [P]
  GitHub: https://github.com/leszekhanusz/diffusion-ui
- SD in-painting hair test
  One way to do it is to use DiffusionUI, a web interface I made in Vue. Here is a crappy video demo showing the process.
- What's the best install of Stable Diffusion right now?
  If you want to do inpainting, you can try my DiffusionUI interface. I'll add a feature soon to save the generated images in local storage in the browser. Here is a small demo.
- Having fun with my proof-of-concept of a web interface to do directed inpainting with stable-diffusion
  It's still a work in progress, but you can already play with it. Check it out on GitHub.
What are some alternatives?
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
glid-3-xl-stable - stable diffusion training
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion-webui - Stable Diffusion web UI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
diffusion-ui-backend - Backend for the diffusion-ui frontend
diffusers-interpret - Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.