BoostingMonocularDepth
By compphoto
BoostYourOwnDepth
Apply our monocular depth boosting to your own network! (by compphoto)
| | BoostingMonocularDepth | BoostYourOwnDepth |
|---|---|---|
| Mentions | 7 | 2 |
| Stars | 1,446 | 239 |
| Growth | - | - |
| Activity | 6.6 | 3.5 |
| Latest commit | 2 months ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
BoostingMonocularDepth
Posts with mentions or reviews of BoostingMonocularDepth.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-12-09.
- Midas Resolution in Controlnet
I suggest using this instead: https://github.com/compphoto/BoostingMonocularDepth
- Boosting Monocular Depth repo
- Cthulhu Coin Render using Generated Image
Alternatively maybe run the original image through Boosting Monocular Depth / MiDAS to generate the height map and use that in Substance Designer to generate the other maps (I've only tried this with environment images, not textures) https://github.com/compphoto/BoostingMonocularDepth
- Does anyone have tips for creating depth maps from 2D footage?
With Boosting Monocular Depth (using MiDaS or LeReS) you can batch-process multiple images (frames) by default with the Colab: just load them into the "input" folder. https://github.com/compphoto/BoostingMonocularDepth/blob/main/Boostmonoculardepth.ipynb (With the free tier of Colab you might only be allowed a couple of hours a day.) To download a batch of depth-map frames quickly, link your Google Drive to the Colab; you can then drag them directly into a Drive folder from the Colab. There is also now a (MiDaS-based, I think) After Effects solution, https://aescripts.com/depth-scanner/, but I get an error trying to run it on my hardware.
- VC#4 - pancake - vc.ajmoon.uk - VQGAN/CLIP + 3D Photo Inpainting + Image Super-Resolution
Watch out for the depth model though. By default that uses BoostingMonocularDepth, which is from Adobe.
- High-resolution depth estimation from a single image
All the links are over here: https://github.com/compphoto/BoostingMonocularDepth
BoostYourOwnDepth
Posts with mentions or reviews of BoostYourOwnDepth.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-12-09.
- Boosting Monocular Depth repo
We present a stand-alone implementation of our Merging Operator. This new repo allows using any pair of monocular depth estimations in our double estimation. This includes using separate networks for the base and high-res estimations, using networks not supported by this repo (such as MiDaS v3), or using manually edited depth maps for artistic purposes. It will also be useful for researchers developing CNN-based monocular depth estimation as a way to quickly apply double estimation to their own network. For more details please take a look here.
- Experiments with SD inpainting model and rotation
2) Boost the monocular depth with another Neural Network: https://github.com/compphoto/BoostYourOwnDepth
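The idea behind merging a base and a high-res estimation, as described in the repo quote above, can be illustrated with a toy sketch. This is not the paper's actual merging operator; it is a hypothetical simplification under the assumption that the low-res base estimate supplies globally consistent structure while the high-res estimate contributes only fine detail:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def merge_depth(base: np.ndarray, high: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Toy double-estimation merge: upsample the low-res base estimate,
    then re-inject only the high-frequency detail of the high-res estimate."""
    # Upsample the base estimate to the high-res grid.
    factors = (high.shape[0] / base.shape[0], high.shape[1] / base.shape[1])
    base_up = zoom(base, factors, order=1)
    # High-pass the high-res estimate: keep its fine detail, drop its
    # (often inconsistent) low-frequency structure.
    detail = high - gaussian_filter(high, sigma)
    return base_up + detail
```

Because the low frequencies come entirely from the base estimate, any global depth-ordering errors in the high-res pass are suppressed, while its edge detail survives.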
What are some alternatives?
When comparing BoostingMonocularDepth and BoostYourOwnDepth you can also consider the following projects:
3d-photo-inpainting - [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
MiDaS - Code for robust monocular depth estimation described in "Ranftl et. al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"