| | auto-sd-paint-ext | diffusers |
|---|---|---|
| Mentions | 27 | 105 |
| Stars | 472 | 1,873 |
| Growth | - | - |
| Activity | 3.7 | 7.0 |
| Latest commit | 5 months ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
auto-sd-paint-ext
- Opendream: A Non-Destructive UI for Stable Diffusion
  It makes more sense to embed Stable Diffusion capabilities into well-established image editors such as GIMP, Photoshop, Krita, or Figma, which come with layered, non-destructive functionality, rather than attempting the opposite approach.
  https://github.com/Interpause/auto-sd-paint-ext
- it's so convenient
  You might be interested to know that Krita has an SD plugin that I've found works quite well, if you're willing and able to install and run SD locally. Krita is now my primary art program. https://github.com/Interpause/auto-sd-paint-ext
- Adobe just added generative AI capabilities to Photoshop 🤯
- Why am I sharing this extensive list of alternatives to Adobe products? No reason... (I personally strongly recommend Krita.)
  GitHub - Interpause/auto-sd-paint-ext: Extension for AUTOMATIC1111 to add custom backend API for Krita Plugin & more
- I've created a simple Gimp plugin that allows you to use Automatic1111's API
- Major update: Automatic1111 Photoshop Stable Diffusion plugin V1.2.0, ControlNet, One Click Installer and More, Free and Open Source
  Yep, Krita has an SD plugin too (https://github.com/Interpause/auto-sd-paint-ext), but it's probably not as good as this Photoshop one yet.
- Just discovered the auto-sd-paint-ext / Krita Extension from web-ui A111
- If Midjourney runs Stable Diffusion, why is its output better?
  Again, in my humble opinion, it's nice to be able to choose among different models and algorithms (at least if you're willing to pay the subscription costs). Use whatever best suits your needs or your ideas. I personally prefer SD's versatility and the workflows enabled by "being able to get my hands on all the parameters": sampler selection, CFG scale, inpainting, and other stuff (have you tried InvokeAI's unified canvas or Automatic1111's Krita plugin?). Not to mention the ability to do progressive, potentially infinite upscaling of an image (Automatic1111's "SD upscale" script or InvokeAI's "Embiggen").
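The "CFG scale" mentioned in the post above controls classifier-free guidance: at each denoising step the model produces two noise predictions, one unconditional and one conditioned on the prompt, and blends them. A minimal sketch of that arithmetic (the function name and the toy arrays are illustrative, not any particular library's API):

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output and toward the prompt-conditioned one.
    A scale of 1.0 adds no extra push; ~7.5 is a common default."""
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Toy two-element "noise predictions" to show the effect of the scale.
uncond = np.array([0.0, 1.0])
cond = np.array([1.0, 1.0])

print(cfg_combine(uncond, cond, 1.0))  # identical to cond
print(cfg_combine(uncond, cond, 7.5))  # overshoots toward cond
```

A scale of 1.0 reproduces the conditioned prediction unchanged; larger values push the sample harder toward the prompt, trading away variety and, at the extreme, image quality.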
- Trouble inpainting a subject into a scene in Krita using auto-sd-paint-ext
  I have a simple question that I haven't been able to figure out for the past 12 hours or so. I'm using the https://github.com/Interpause/auto-sd-paint-ext extension for the Krita plugin, and I'm simply trying to inpaint creatures and other things into a scene, but no matter what I do, the result looks nothing like what I want. I followed the inpainting tutorial listed on GitHub (New Layer from Visible), then made a new layer to paint the mask on, and tried all the fill methods with different step counts. I'm fairly new to all of this. I'm using a custom model checkpoint, by the way, if that matters (Protogen 5.8).
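On the mask itself: Stable Diffusion inpainting pipelines generally expect a single-channel mask in which white (255) marks the pixels to regenerate and black (0) the pixels to keep. A minimal sketch of that convention as a NumPy array (the helper name and box coordinates are illustrative, not part of the plugin):

```python
import numpy as np

def make_inpaint_mask(width, height, box):
    """Binary inpainting mask as a (height, width) uint8 array:
    255 marks pixels the model should repaint, 0 marks pixels to keep.
    box = (left, top, right, bottom) is the region to regenerate."""
    left, top, right, bottom = box
    mask = np.zeros((height, width), dtype=np.uint8)  # start fully "keep"
    mask[top:bottom, left:right] = 255                # region to repaint
    return mask

mask = make_inpaint_mask(512, 512, (128, 128, 384, 384))
print(mask[256, 256], mask[0, 0])  # 255 0
```

If the repainted region ignores the surrounding scene, the fill method and denoising strength matter as much as the mask: lower strengths keep more of the original pixels under the masked area.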
- Stable.art: open-source photoshop plugin for Automatic1111 (locally or Google Colab!) with integration of Lexica.art prompts
diffusers
- Useful Links
  ShivamShrirao's Diffusers: pretrained diffusion models across multiple modalities.
- DreamBooth fine-tuning failing to get the style
  Like the title says, I'm trying to fine-tune a model to match the style of a popular manhwa. I'm using the ShivamShrirao Google Colab to accomplish this.
- How to resume Dreambooth training?
  I am running the DreamBooth_Stable_Diffusion.ipynb notebook from ShivamShrirao locally on my machine. Let's say I have trained for 500 iterations and it hasn't converged yet. How do I make it resume training from that iteration so it can do another 500?
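One common workaround, assuming the notebook saves intermediate weights into step-numbered subfolders of the output directory (its save-interval behavior), is to point the pretrained-model path at the newest of those folders and train for the remaining steps. A small helper under that folder-layout assumption (the function name is illustrative):

```python
import os

def latest_checkpoint(output_dir):
    """Return the path of the highest-numbered checkpoint subfolder
    (e.g. output_dir/500), or None if no numbered folder exists."""
    steps = [int(d) for d in os.listdir(output_dir) if d.isdigit()]
    if not steps:
        return None
    return os.path.join(output_dir, str(max(steps)))
```

Restart training with `--pretrained_model_name_or_path` set to the returned folder and `--max_train_steps` set to the additional steps you still want; the step counter starts from zero again, so track the total yourself.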
- Non web-ui colab
  My understanding, based on messages from an (alleged) representative of Colab, is that the webui is the problem, not SD itself. This also seems to be the consensus in the comments sections of other posts. I have not yet seen a link to Colab-based webui alternatives, so here is something I found in a tutorial. I am certain that there are better alternatives. Anyone have a better idea? This will still probably be useful to other people like me who are just messing around.
- [Stablediffusion] Guide to DreamBooth with 8 GB of VRAM on Windows
- Finally got Dreambooth running without errors... but is it even using the model I trained?
  I'm running ShivamShrirao's fork of diffusers; I ran into an fp16 issue and had to patch in a fix from the main branch (#1567).
- Shivam Stable Diffusion: Getting same example models repeatedly (SD + Dreambooth)
  I am running the Shivam Stable Diffusion Jupyter notebook: diffusers/DreamBooth_Stable_Diffusion.ipynb at main · ShivamShrirao/diffusers · GitHub.
- Running Stable Diffusion locally with personalized changes
- Can't create embeddings with dreambooth ckpt
- Weird issue using Shivam's Diffuser notebook
  Are you using this one? https://github.com/S
What are some alternatives?
Auto-Photoshop-StableDiffusion-Plugin - A user-friendly plug-in that makes it easy to generate stable diffusion images inside Photoshop using either Automatic or ComfyUI as a backend.
stable-diffusion-webui - Stable Diffusion web UI
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
IOPaint - Image inpainting tool powered by SOTA AI models. Remove any unwanted objects, defects, or people from your pictures, or erase and replace anything in them (powered by Stable Diffusion).
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
openOutpaint - local offline javascript and html canvas outpainting gizmo for stable diffusion webUI API 🐠
efficient-dreambooth - [Moved to: https://github.com/smy20011/dreambooth-docker]
krita_stable_diffusion - A Stable Diffusion plugin for Krita
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.