|  | 3d-photo-inpainting | BoostingMonocularDepth |
| --- | --- | --- |
| Mentions | 22 | 7 |
| Stars | 6,828 | 1,444 |
| Growth | 0.1% | - |
| Activity | 0.0 | 6.6 |
| Latest commit | 8 months ago | about 2 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
3d-photo-inpainting
- I have an AI Generated jpg. I want to add subtle looping animation to it
- What's the latest and greatest in 3d img2img/txt2img?
If you are looking to create actual 3d models, the DepthMap extension does have a function to create PLY models with vertex color information, and to render clips with simple camera moves from that extracted 3d scene, including inpainting (as per the 3d-photo-inpainting paper)
- Quick test of AI and Blender with camera projection.
The depthmap extension for A1111 has implemented the 3d-photo-inpainting code, which does exactly that kind of thing. That's what I used to use, first on a Colab and then adapted for Windows so I could run it locally, but it's much more convenient to do it directly from the Automatic1111 WebUI.
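For context, the "PLY models with vertex color information" mentioned above are just point/mesh files where each vertex carries RGB alongside XYZ, so a depth map plus the source image is enough to build one. A minimal sketch of writing such a file by hand (a hypothetical helper, not the extension's actual code):

```python
def write_colored_ply(path, points):
    """Write an ASCII PLY where each vertex is (x, y, z, r, g, b).

    points: iterable of 6-tuples. Illustrative sketch only; the
    DepthMap extension exports a richer mesh than this.
    """
    verts = list(points)
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(verts)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in verts:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Example: lift a tiny 2x2 depth map into colored points
# (pixel position -> x, y; depth value -> z; image pixel -> r, g, b)
depth = [[0.5, 0.6], [0.7, 0.8]]
color = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 0)]]
pts = [(x, y, depth[y][x], *color[y][x]) for y in range(2) for x in range(2)]
write_colored_ply("scene.ply", pts)
```

The resulting file opens in any viewer that understands ASCII PLY (e.g. MeshLab or Blender's PLY importer).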
- Is there an extension that does this?
- Generate multiple complex subjects on a single image all at once with a depth aware custom extension!
But things are even older than stable diffusion.
- Coronal mass ejection of the sun. Image from r/space. Crossview ML generated
It's a slightly modified version of https://shihmengli.github.io/3D-Photo-Inpainting/
- [R] META researchers generate realistic renders from unseen views of any human captured from a single-view RGB-D camera
Thanks! I barely did anything though — I just took a Deep Dream photo made by another artist (Daniel Ambrosi) and passed it through this: https://shihmengli.github.io/3D-Photo-Inpainting/ (GitHub and Colab links at the bottom). I didn't even have to come up with the camera trajectory; it was one of the presets in the repo.
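Those trajectory presets boil down to a short list of per-frame camera offsets around the source viewpoint. A sketch of what a "circle"-style preset might compute (parameter names and values are illustrative, not the repo's actual config):

```python
import math

def circle_trajectory(num_frames=60, radius=0.04):
    """Sketch of a circle-style camera preset: per-frame (x, y, z)
    offsets that sweep a small circle around the original viewpoint.
    Radius and frame count here are made-up illustrative values."""
    path = []
    for i in range(num_frames):
        t = 2 * math.pi * i / num_frames
        # x/y trace the circle; z stays at the source view
        path.append((radius * math.cos(t), radius * math.sin(t), 0.0))
    return path

traj = circle_trajectory(4, radius=1.0)
# 4 points on the unit circle, starting at (1, 0, 0)
```

Each offset would then be used to render one frame of the output clip from the inpainted layered mesh.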
- Tumultuous Seas
Pretty sure it's this: https://github.com/vt-vl-lab/3d-photo-inpainting
- These are the raw frames I got from Gaugan2, but I'll be posting modified versions in the comment section.
- 3D Photography Using Context-Aware Layered Depth Inpainting
BoostingMonocularDepth
- Midas Resolution in Controlnet
I suggest using this instead: https://github.com/compphoto/BoostingMonocularDepth
- Boosting Monocular Depth repo
- Cthulhu Coin Render using Generated Image
Alternatively, maybe run the original image through Boosting Monocular Depth / MiDaS to generate the height map, then use that in Substance Designer to generate the other maps (I've only tried this with environment images, not textures): https://github.com/compphoto/BoostingMonocularDepth
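The "other maps" step above usually starts with deriving a normal map from the height map, which is just per-pixel finite differences. A minimal pure-Python sketch of that idea (not Substance Designer's actual algorithm; the z-scale of 2.0 is an arbitrary strength choice):

```python
def normals_from_height(height):
    """Derive a per-pixel normal map from a height map via central
    differences, clamped at the borders -- the same basic idea tools
    like Substance Designer use for height-to-normal conversion.

    height: 2D list of floats. Returns 2D list of unit (nx, ny, nz).
    """
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # slope along x and y, clamping indices at the edges
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            # normal opposes the slope; 2.0 controls bump strength
            nx, ny, nz = -dx, -dy, 2.0
            norm = (nx * nx + ny * ny + nz * nz) ** 0.5
            row.append((nx / norm, ny / norm, nz / norm))
        out.append(row)
    return out
```

A flat height map yields normals pointing straight up, (0, 0, 1); a ramp tilts them against the slope. To write an actual normal-map image you would remap each component from [-1, 1] to [0, 255].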
- Does anyone have tips for creating depth maps from 2D footage?
With Boosting Monocular Depth (using MiDaS or LeReS) you can batch-process multiple images (frames) by default with the Colab: you just load them into the "input" folder. https://github.com/compphoto/BoostingMonocularDepth/blob/main/Boostmonoculardepth.ipynb (with the free version of Colab you might only be allowed a couple of hours a day). To download a bunch of depth-map frames quickly, link your Google Drive to the Colab; then you can drag them directly into a Drive folder from the Colab. There is also now a (MiDaS-based, I think) After Effects solution, https://aescripts.com/depth-scanner/, but I get an error trying to run it with my hardware.
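For the batch step, the folder scan and chunking around the model are simple to reproduce locally. A sketch of that plumbing (the "input" folder name and extension list are assumptions matching the Colab workflow, not code from the repo):

```python
from pathlib import Path

def collect_frames(input_dir, exts=(".png", ".jpg", ".jpeg")):
    """Gather image frames from an input folder in sorted order, the
    way the BoostingMonocularDepth Colab consumes its "input" folder.
    Extension filter here is an assumption for the sketch."""
    root = Path(input_dir)
    return sorted(p for p in root.iterdir() if p.suffix.lower() in exts)

def batches(items, size):
    """Yield fixed-size chunks so frames can be processed in batches."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Each batch would then be fed to the depth model (MiDaS or LeReS in the Colab) and the resulting depth maps written to the output folder with matching filenames, so the frames stay in sequence for video reassembly.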
- VC#4 - pancake - vc.ajmoon.uk - VQGAN/CLIP + 3D Photo Inpainting + Image Super-Resolution
Watch out for the depth model, though. By default that uses BoostingMonocularDepth, which is Adobe's.
- High-resolution depth estimation from a single image
All the links are over here: https://github.com/compphoto/BoostingMonocularDepth
What are some alternatives?
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
cupscale - Image Upscaling GUI based on ESRGAN
BoostYourOwnDepth - Apply our monocular depth boosting to your own network!
image-super-resolution - 🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
caire - Content aware image resize library
sharp - High performance Node.js image processing, the fastest module to resize JPEG, PNG, WebP, AVIF and TIFF images. Uses the libvips library.
nerfmm - (Arxiv 2021) NeRF--: Neural Radiance Fields Without Known Camera Parameters
depthmap2mask - Create masks out of depthmaps in img2img
multi-subject-render - Generate multiple complex subjects all at once!
Serpens-Bledner-Addons