| | 3d-photo-inpainting | multi-subject-render |
|---|---|---|
| Mentions | 22 | 18 |
| Stars | 6,828 | 359 |
| Growth | 0.1% | - |
| Activity | 0.0 | 2.5 |
| Last commit | 8 months ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
3d-photo-inpainting
- I have an AI Generated jpg. I want to add subtle looping animation to it
- What's the latest and greatest in 3D img2img/txt2img?
If you are looking to create actual 3D models, the DepthMap extension does have a function to create PLY models with vertex color information, and to render clips with simple camera moves from that extracted 3D scene, including inpainting (as per the 3d-photo-inpainting paper).
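The "PLY with vertex color" output mentioned above is a plain-text format, so it is easy to see what such an export contains. The writer below is a minimal illustrative sketch using only the standard library; the point coordinates and colors are invented, not taken from any real depth extraction.

```python
# Sketch of the PLY point-cloud format (ASCII variant) with per-vertex
# RGB color, the kind of file a depth-to-3D export produces.
# All vertex data here is made up for illustration.

def write_colored_ply(path, points):
    """points: list of (x, y, z, r, g, b) tuples, colors as 0-255 ints."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in points:
            # One vertex per line: position followed by its color.
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Three example vertices: 3D position plus a color sampled per pixel.
write_colored_ply("scene.ply", [
    (0.0, 0.0, 1.0, 255, 0, 0),
    (0.1, 0.0, 1.2, 0, 255, 0),
    (0.0, 0.1, 0.9, 0, 0, 255),
])
```

Because the format is self-describing, any viewer that understands PLY (MeshLab, Blender, Open3D) can load the result and render camera moves through the colored points.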
- Quick test of AI and Blender with camera projection.
The DepthMap extension for A1111 has implemented the 3d-photo-inpainting code that does that kind of thing. That's what I used to use, first on a Colab, and then adapted for Windows so I could run it locally. But it's much more convenient to do it directly from the Automatic1111 WebUI.
- Is there an extension that does this?
- Generate multiple complex subjects on a single image all at once with a depth-aware custom extension!
But things are even older than Stable Diffusion.
- Coronal mass ejection of the Sun. Image from r/space. Cross-view, ML generated
It's a slightly modified version of https://shihmengli.github.io/3D-Photo-Inpainting/
- [R] META researchers generate realistic renders from unseen views of any human captured from a single-view RGB-D camera
Thanks! I barely did anything though, just took a Deep Dream'ed photo made by another artist (Daniel Ambrosi) and passed it through this: https://shihmengli.github.io/3D-Photo-Inpainting/ (GitHub and Colab links at the bottom). Didn't even have to come up with the camera trajectory; it was one of the presets in the repo.
- Tumultuous Seas
Pretty sure it's this: https://github.com/vt-vl-lab/3d-photo-inpainting
- These are the raw frames I got from Gaugan2, but I'll be posting modified versions in the comment section.
- 3D Photography Using Context-Aware Layered Depth Inpainting
multi-subject-render
- Creating pictures of multiple people with distinct faces
You can use the multi-subject renderer: https://github.com/Extraltodeus/multi-subject-render.git
- Can I use SD to generate group pictures (of say, me and my cousin, or me and multiple cousins)?
Get this Extension, and as always, please read the docs to avoid problems.
- Find it hard to tune my prompt for more than 2 characters
There's also a script/extension, https://github.com/Extraltodeus/multi-subject-render, but it's fiddly to get working right, and I think the other workflow is faster.
- Textual Inversion: TI TLDR for the Lazy. How to Make Fake People: Simple TI Training Using 6 Images and Very Low Settings. Bonus 1: How to Make Fake People that Look Like Anything You Want. Bonus 2: Why 1980s Nightcrawler Don't Care About Your Prompts. With Unedited Image Samples.
- How to do multiple chars in 1 image
There are some ideas to create multiple different subjects, such as this extension for automatic (https://github.com/Extraltodeus/multi-subject-render), or Area Composition if you are using ComfyUI (https://comfyanonymous.github.io/ComfyUI_examples/area_composition/).
- How to detail 2 objects, each with its own qualities in prompt?
- Ladies in sexy pajamas
- Uhhhhh
- Tips for creating picture with multiple characters?
You can do it with https://github.com/Extraltodeus/multi-subject-render, but I don't really know how to use it.
- What are you struggling to do?
There is an extension called multi-subject-render that allows you to provide one prompt for the background and a second prompt for the foreground.
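The core idea behind a depth-aware merge like that, combining a foreground pass with a background pass, can be sketched in a few lines of NumPy. Everything below (array shapes, the threshold, the "larger depth value = closer" convention, which matches MiDaS-style estimators) is an illustrative assumption, not the extension's actual implementation.

```python
import numpy as np

def depth_composite(background, foreground, depth, near_cutoff):
    """Paste foreground pixels whose depth marks them as 'near' onto the background.

    background, foreground: HxWx3 uint8 images
    depth: HxW float map (assumed convention: larger value = closer)
    near_cutoff: depth above which a pixel counts as foreground
    """
    mask = depth > near_cutoff    # HxW boolean mask of near pixels
    out = background.copy()
    out[mask] = foreground[mask]  # overwrite only the near pixels
    return out

# Toy 2x2 example: top row is "near" (kept from foreground),
# bottom row is "far" (background shows through).
bg = np.zeros((2, 2, 3), dtype=np.uint8)        # black background
fg = np.full((2, 2, 3), 255, dtype=np.uint8)    # white foreground
depth = np.array([[0.9, 0.8],
                  [0.1, 0.2]])
result = depth_composite(bg, fg, depth, near_cutoff=0.5)
```

In practice the depth map would come from a monocular depth estimator run on the foreground render, which is what makes the merge "depth aware" rather than a flat alpha overlay.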
What are some alternatives?
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
cupscale - Image Upscaling GUI based on ESRGAN
stable-diffusion-webui-distributed - Chains stable-diffusion-webui instances together to facilitate faster image generation.
image-super-resolution - 🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.
depthmap2mask - Create masks out of depthmaps in img2img
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
sdweb-merge-board - Multi-step automation merge tool. Extension/Script for Stable Diffusion UI by AUTOMATIC1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui
caire - Content aware image resize library
sd-webui-reactor-force - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111, SD.Next, Cagliostro) with NVIDIA GPU Support
BoostingMonocularDepth
MiDaS - Code for robust monocular depth estimation described in "Ranftl et. al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"