diffusion-ui vs glid-3-xl-stable

| | diffusion-ui | glid-3-xl-stable |
|---|---|---|
| Mentions | 11 | 20 |
| Stars | 139 | 286 |
| Growth | - | - |
| Activity | 4.6 | 0.0 |
| Latest commit | 26 days ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
diffusion-ui
-
Who needs Photoshop generative AI when we have AUTO1111?
You can use diffusion-ui on top of automatic1111 for easy outpainting. Run automatic1111 with `--cors-allow-origins=http://127.0.0.1:5173,https://diffusionui.com`, then select the automatic1111 backend, upload an image, and use the mouse scroll wheel to zoom out.
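As a concrete example, the launch command might look like this (the `webui.sh` script name is the webui's usual Linux entry point, and port 5173 is the default Vite dev-server port for the frontend; your install may differ):

```shell
# Launch automatic1111 with CORS enabled so the DiffusionUI frontend
# (local dev server or the hosted diffusionui.com page) can call its API.
./webui.sh --api --cors-allow-origins=http://127.0.0.1:5173,https://diffusionui.com
```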
-
Show HN: InvokeAI, an open source Stable Diffusion toolkit and WebUI
I see you've got my https://github.com/leszekhanusz/diffusion-ui GUI, but it seems to be linked to a completely unrelated face-swapping interface?
-
Anyone using the stable-diffusion-webui repo by automatic1111 as an API?
Yes, check out diffusion-ui.
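For anyone wondering what "as an API" looks like in practice, here is a minimal sketch of calling the webui's built-in HTTP API (available when it is started with `--api`; the `127.0.0.1:7860` address is the webui's default, and the helper names here are my own):

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # default automatic1111 address (assumption)

def build_txt2img_payload(prompt, steps=20, width=512, height=512):
    """Build the JSON body for POST /sdapi/v1/txt2img."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(prompt):
    """Send a generation request; the response carries base64 images under 'images'."""
    data = json.dumps(build_txt2img_payload(prompt)).encode()
    req = urllib.request.Request(
        API_URL + "/sdapi/v1/txt2img",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Frontends like diffusion-ui talk to the same `/sdapi/v1/...` endpoints; the payload above is the minimum, and the API accepts many more generation parameters.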
-
DiffusionUI responsive frontend working with the automatic1111 fork
I modified my GUI Stable Diffusion frontend to be able to use the automatic1111 fork as a backend.
-
Can we please make a general update on all the "most important" news/repos available?
It's not really well known at the moment, but you also have my DiffusionUI interface, which is based on diffusers and has a backend where you can disable the NSFW filter.
- Show HN: Guided inpainting with Stable Diffusion using DiffusionUI
-
Presenting DiffusionUI, a web GUI for Stable Diffusion backends [P]
GitHub: https://github.com/leszekhanusz/diffusion-ui
-
SD in-painting hair test
One way to do it is to use diffusionui, a web interface I made in Vue. Here is a crappy video demo showing the process.
-
What's the best install of Stable Diffusion right now?
If you want to do inpainting, you can try my diffusionui interface. I'll add a feature soon to save the generated images in local storage in the browser. Here is a small demo.
-
Having fun with my proof-of-concept of a web interface to do directed inpainting with stable-diffusion
It's still a work in progress, but you can already play with it. Check it out on GitHub.
glid-3-xl-stable
-
New inpainting model from RunwayML out
I don't know how you can say that; it's completely different from anything we had before. The only exception was https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model, a fine-tuned version of v1.4, but not having separate channels for the original image and the mask makes it weaker.
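To make the "separate channels" point concrete, here is a minimal numpy sketch (shapes assumed for a 512×512 image at the usual 8× latent downsampling) of how the RunwayML inpainting UNet is conditioned, following the way the diffusers inpainting pipeline concatenates its inputs:

```python
import numpy as np

latents = np.zeros((1, 4, 64, 64))               # noisy latent being denoised
mask = np.zeros((1, 1, 64, 64))                  # downsampled binary inpainting mask
masked_image_latents = np.zeros((1, 4, 64, 64))  # VAE encoding of the masked image

# The inpainting UNet's first conv consumes all three, concatenated on the
# channel axis: 4 + 1 + 4 = 9 input channels instead of the standard 4.
unet_input = np.concatenate([latents, mask, masked_image_latents], axis=1)
print(unet_input.shape)  # (1, 9, 64, 64)
```

A model fine-tuned without those extra channels (like the glid-3-xl-stable custom model) only sees the noisy latent and has to infer the mask region implicitly, which is the weakness being described.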
-
Local inpainting/outpainting GUIs/Programs?
Check out the item by lkwq007 in this list https://www.reddit.com/r/StableDiffusion/comments/wqaizj/list_of_stable_diffusion_systems/ , and also the model for this web app https://replicate.com/devxpy/glid-3-xl-stable , which I believe is this https://github.com/Jack000/glid-3-xl-stable .
-
I'm building my own image editor using canvas and Stable Diffusion AI model
Right now I am using a different, better-optimized model just for outpainting/inpainting, using https://github.com/Jack000/glid-3-xl-stable as the base.
-
getimg.ai - I've made outpainting/inpainting editor publicly available
I'm using a slightly modified and optimized version of https://github.com/Jack000/glid-3-xl-stable for inpainting/outpainting.
-
Inpainting/outpainting webapp UI with actually good inpainting capabilities, mobile support & more (using glid-3-xl-sd custom inpainting model) - patience.ai update
For this editor we've integrated Jack Qiao's excellent custom inpainting model from the glid-3-xl-sd project instead. This is a fine-tuned version of Stable Diffusion with significantly better inpainting capabilities than standard SD. You can read more about how it works here along with comparison images between it and regular SD.
-
Out/Inpainting Specialized Model (Jack's)
You can't; they are different architectures: https://github.com/Jack000/glid-3-xl-stable/issues/17
-
[Update] stablediffusion-infinity now becomes a web app with better UI (outpainting with Stable Diffusion on an infinite canvas)
I am wondering, though, if this one uses the glid-3 inpainting model (https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model)?
- Will Stable Diffusion ever gain a better inpainting feature on par with Dalle, or is this a fundamental difference?
- Stable Diffusion, custom in/outpainting model
-
Progress on getimg.ai - outpainting prototype and other updates
(Also check out this custom SD inpainting/outpainting model; it's easily the best I've seen: https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model)
What are some alternatives?
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-feature-showcase - Feature showcase for stable-diffusion-webui
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
stable-diffusion - Latent Text-to-Image Diffusion
diffusion-ui-backend - Backend for the diffusion-ui frontend
awesome-stable-diffusion - Curated list of awesome resources for the Stable Diffusion AI Model.
diffusers-interpret - Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]