depthmap2mask vs multi-subject-render

| | depthmap2mask | multi-subject-render |
|---|---|---|
| Mentions | 26 | 18 |
| Stars | 352 | 359 |
| Growth | - | - |
| Activity | 2.7 | 2.5 |
| Latest Commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | - | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
depthmap2mask
- Jessica Rabbit | Toon integration test
- Is there a Chroma Key embedding anywhere?
- StableDiffusion locally, what am I doing wrong? What settings should I use? I am using img2img and keep getting these messed-up results
  For changing the background, I suggest using depthmap2mask.
- Using SD as a green screen?
  Have you tried depthmap2mask?
- Quick test of AI and Blender with camera projection.
  Looks really good. Have you tried img2depth for the texturing? GitHub - Extraltodeus/depthmap2mask: Create masks out of depthmaps in img2img
- Ideas for using SD to automatically enhance photographic portraits without completely distorting the face
  Have you tried https://github.com/Extraltodeus/depthmap2mask ?
- Deforum: FileNotFoundError: [Errno 2] No such file or directory:
  No, and I don't need to. depthmap2mask works sloppily; I don't like it. It's much better to create the mask for "Inpainting" using image-editing software. Here you can see how it's done: https://www.youtube.com/watch?v=dnIYTGW1m8w
- flowdas-meta missing from PyPI, can't pip install launch? Impossible?
- The transformation no one asked for
  Sent to img2img and used the Depth Aware img2img mask with the model set to `midas_v21_small` so that I would hopefully affect as little of the image as possible. (After seeing the pants morph, I think it might have been better to just use inpaint.)
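Create-masks-from-depth workflows like the ones mentioned above boil down to thresholding a depth map into a binary inpainting mask: near pixels become editable, far pixels stay protected (or the reverse). A minimal sketch assuming numpy and Pillow; the function name and default threshold are illustrative, not the extension's actual API:

```python
import numpy as np
from PIL import Image

def depth_to_mask(depth_img, threshold=128, invert=False):
    """Turn a grayscale depth map into a binary inpainting mask.

    Pixels at or above `threshold` (brighter, i.e. closer in most
    depth-map conventions) become white (editable in img2img);
    everything else becomes black (preserved). `invert` flips this,
    which is useful for background replacement / green-screen use.
    """
    depth = np.asarray(depth_img.convert("L"))
    keep = depth < threshold if invert else depth >= threshold
    return Image.fromarray((keep.astype(np.uint8)) * 255)

# Hypothetical usage:
# mask = depth_to_mask(Image.open("depth_map.png"), threshold=140)
# mask.save("mask.png")  # feed to img2img "Inpaint upload"
```

In practice the threshold needs tuning per image, and `invert=True` gives the "change only the background" behavior suggested in the green-screen comment above.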
- Me waiting for A1111 Depth2img to officially support custom depth maps.
multi-subject-render
- Creating pictures of multiple people with distinct faces
  You can use the multi-subject renderer: https://github.com/Extraltodeus/multi-subject-render.git
- Can I use SD to generate group pictures (of say, me and my cousin, or me and multiple cousins)?
  Get this extension, and as always, please read the docs to avoid problems.
- Find it hard to tune my prompt for more than 2 characters
  There's also a script/extension, https://github.com/Extraltodeus/multi-subject-render, but it's fiddly to get working right, and I think the other workflow is faster.
- Textual Inversion: TI TLDR for the Lazy. How to Make Fake People: Simple TI Training Using 6 Images and Very Low Settings. Bonus 1: How to Make Fake People that Look Like Anything You Want. Bonus 2: Why 1980s Nightcrawler Doesn't Care About Your Prompts. With Unedited Image Samples.
- How to do multiple chars in 1 image
  There are some ideas for creating multiple different subjects, such as this extension for AUTOMATIC1111 (https://github.com/Extraltodeus/multi-subject-render), or Area Composition if you are using ComfyUI (https://comfyanonymous.github.io/ComfyUI_examples/area_composition/).
- How to detail 2 objects, each with its own qualities in prompt?
- Ladies in sexy pajamas
- Uhhhhh
- Tips for creating a picture with multiple characters?
  You can do it with https://github.com/Extraltodeus/multi-subject-render but I don't really know how to use it.
- What are you struggling to do?
  There is an extension called multi-subject-render that allows you to provide one prompt for the background and a second prompt for the foreground.
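The background/foreground split described above comes down to layered compositing: generate the background and each subject separately, derive a cut-out mask for each subject (e.g. from its depth map), and paste them back to front. A minimal sketch with Pillow; `composite_subjects` is a hypothetical helper, not part of the extension:

```python
from PIL import Image

def composite_subjects(background, subjects):
    """Composite foreground subjects onto a background, back to front.

    `subjects` is a list of (image, mask) pairs. Each mask is a
    grayscale image: white pixels keep the subject, black pixels
    let the background (and earlier layers) show through.
    """
    out = background.copy()
    for subject_img, mask in subjects:
        # Pillow uses the mask as per-pixel paste opacity.
        out.paste(subject_img, (0, 0), mask)
    return out
```

Later entries in the list land on top, which is why a depth-derived mask per subject is a natural fit: nearer subjects are pasted last.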
What are some alternatives?
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
stable-diffusion-webui-distributed - Chains stable-diffusion-webui instances together to facilitate faster image generation.
stable-diffusion-webui - Stable Diffusion web UI
sdweb-merge-board - Multi-step automation merge tool. Extension/Script for Stable Diffusion UI by AUTOMATIC1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui
3d-photo-inpainting - [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
sd-webui-reactor-force - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111, SD.Next, Cagliostro) with NVIDIA GPU Support
Merge-Stable-Diffusion-models-without-distortion - Adaptation of the merging method described in the paper - Git Re-Basin: Merging Models modulo Permutation Symmetries (https://arxiv.org/abs/2209.04836) for Stable Diffusion
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
Lora-for-Diffusers - The most easy-to-understand tutorial for using LoRA (Low-Rank Adaptation) within diffusers framework for AI Generation Researchers🔥