| | depthmap2mask | auto-sd-paint-ext |
|---|---|---|
| Mentions | 26 | 27 |
| Stars | 352 | 471 |
| Growth | - | - |
| Activity | 2.7 | 3.7 |
| Latest Commit | about 1 year ago | 5 months ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
depthmap2mask
- Jessica Rabbit | Toon integration test
- Is there a Chroma Key embedding anywhere?
- StableDiffusion locally, what am I doing wrong? What settings should I use? I am using img2img and keep getting these messed-up results
For changing the background, I suggest using depthmap2mask.
- Using SD as a green screen?
Have you tried depthmap2mask?
- Quick test of AI and Blender with camera projection.
Looks really good. Have you tried img2depth for the texturing? GitHub - Extraltodeus/depthmap2mask: Create masks out of depthmaps in img2img
- Ideas for using SD to automatically enhance photographic portraits without completely distorting the face
Have you tried https://github.com/Extraltodeus/depthmap2mask ?
- Deforum: FileNotFoundError: [Errno 2] No such file or directory:
No, and I don't need to. depthmap2mask works sloppily; I don't like it. It's much better to create the mask for inpainting using image-editing software. Here you can see how it's done: https://www.youtube.com/watch?v=dnIYTGW1m8w
- flowdas-meta missing from PyPI, can't pip install launch? Impossible?
- The transformation no one asked for
Sent it to img2img and used the Depth Aware img2img mask with the model set to `midas_v21_small` so that I would hopefully affect as little of the image as possible. (After seeing the pants morph, I think it might have been better to just use inpainting.)
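The workflow described above is essentially what depthmap2mask automates: estimate a depth map (e.g. with a MiDaS model such as `midas_v21_small`) and threshold it into an img2img mask so that only near or far regions get regenerated. As a minimal sketch of the thresholding step only, assuming the depth map is already available as a 2D NumPy array (the `depth_to_mask` helper below is hypothetical, not the extension's actual API):

```python
import numpy as np

def depth_to_mask(depth, threshold=0.5, invert=False):
    """Turn a depth map (2D float array, nearer = larger value) into a
    binary mask: pixels nearer than `threshold` (after normalising the
    map to 0..1) become white (255), everything else black (0)."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalise to 0..1
    mask = d >= threshold
    if invert:
        mask = ~mask  # mask the background instead of the subject
    return mask.astype(np.uint8) * 255

# Example: a synthetic depth map with a "near" square in the centre
depth = np.zeros((8, 8))
depth[2:6, 2:6] = 1.0
mask = depth_to_mask(depth)
```

With `invert=True` the same helper would mask the background instead, which is the green-screen use case mentioned above.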
- Me waiting for A1111 Depth2img to officially support custom depth maps.
auto-sd-paint-ext
- Opendream: A Non-Destructive UI for Stable Diffusion
It makes more sense to embed Stable Diffusion capabilities into well-established image editors such as GIMP, Photoshop, Krita, or Figma, which come with layered, non-destructive functionality, than to attempt the opposite approach.
https://github.com/Interpause/auto-sd-paint-ext
- It's so convenient
You might be interested to know that Krita has an SD plugin that I've found works quite well, if you're willing and able to install and run SD locally. Krita is now my primary art program. https://github.com/Interpause/auto-sd-paint-ext
- Adobe just added generative AI capabilities to Photoshop 🤯
- Why am I sharing the extensive list of alternatives to Adobe products? No reason... (Personally, I strongly recommend Krita.)
GitHub - Interpause/auto-sd-paint-ext: Extension for AUTOMATIC1111 to add custom backend API for Krita Plugin & more
- I've created a simple Gimp plugin that allows you to use Automatic1111's API
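The GIMP, Photoshop, and Krita plugins mentioned here all talk to the same AUTOMATIC1111 backend over its REST API (the webui must be started with the `--api` flag). A minimal sketch of an img2img call using only the standard library; the endpoint follows the webui's `/sdapi/v1/img2img` route, but treat the exact field names as assumptions to verify against your webui version:

```python
import base64
import json
from urllib import request

A1111_URL = "http://127.0.0.1:7860"  # default local webui address

def build_img2img_payload(image_path, prompt, denoising_strength=0.6, steps=20):
    """Build the JSON payload for the /sdapi/v1/img2img endpoint.
    The init image is sent as a base64-encoded string."""
    with open(image_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")
    return {
        "init_images": [init_image],
        "prompt": prompt,
        "steps": steps,
        "denoising_strength": denoising_strength,
    }

def img2img(payload):
    """POST the payload to a running webui; returns the decoded JSON
    response (the result images come back base64-encoded)."""
    req = request.Request(
        f"{A1111_URL}/sdapi/v1/img2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This is roughly the shape of request each editor plugin builds behind the scenes; the plugins add mask handling, layer round-tripping, and UI on top.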
- Major update: Automatic1111 Photoshop Stable Diffusion plugin V1.2.0, ControlNet, One Click Installer and More, Free and Open Source
Yep, Krita has an SD plugin too ( https://github.com/Interpause/auto-sd-paint-ext ), but it's probably not as good as this Photoshop one yet.
- Just discovered the auto-sd-paint-ext / Krita Extension from web-ui A1111
- If Midjourney runs Stable Diffusion, why is its output better?
Again in my humble opinion, however, it's nice to be able to choose (at least if you're willing to pay the subscription costs) among different models and algorithms. Use whatever suits best your needs or your ideas. I personally prefer SD's versatility, and the workflows enabled by "being able to get my hands on all the parameters", like sampler selection, CFG scale, inpainting and other stuff (have you tried InvokeAI's unified canvas or Automatic1111's Krita plugin?). Not to mention the ability to do progressive and potentially infinite upscaling of an image (Automatic1111's "SD upscale" script or InvokeAI's "Embiggen").
- Trouble inpainting a subject into a scene in Krita using auto-sd-paint-ext
I have a simple question that I can't seem to figure out after about 12 hours. I'm using the https://github.com/Interpause/auto-sd-paint-ext extension for the Krita plugin, and I'm simply trying to inpaint creatures and general things into a scene, but no matter what I do, the result is nothing like it. I followed the inpainting tutorial they have listed on GitHub (New Layer from Visible, then make a new layer to paint the mask on) and tried all fill methods with different steps. I'm fairly new to all of this. I'm using a custom model checkpoint, by the way, if that matters (Protogen 5.8).
- Stable.art: open-source photoshop plugin for Automatic1111 (locally or Google Colab!) with integration of Lexica.art prompts
What are some alternatives?
civitai - A repository of models, textual inversions, and more
Auto-Photoshop-StableDiffusion-Plugin - A user-friendly plug-in that makes it easy to generate stable diffusion images inside Photoshop using either Automatic or ComfyUI as a backend.
multi-subject-render - Generate multiple complex subjects all at once!
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
IOPaint - Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
3d-photo-inpainting - [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
openOutpaint - local offline javascript and html canvas outpainting gizmo for stable diffusion webUI API 🐠
Merge-Stable-Diffusion-models-without-distortion - Adaptation of the merging method described in the paper - Git Re-Basin: Merging Models modulo Permutation Symmetries (https://arxiv.org/abs/2209.04836) for Stable Diffusion
krita_stable_diffusion - A Stable Diffusion plugin for Krita