ZoeDepth vs DenseDepth

| | ZoeDepth | DenseDepth |
| --- | --- | --- |
| Mentions | 4 | 5 |
| Stars | 1,972 | 1,533 |
| Growth | 6.0% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 6 days ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ZoeDepth
- Software 3D scanner. Free on Prusa Printables
- 🚀 Deep Learning for Deep Objects: ZoeDepth is an AI Model for Multi-Domain Depth Estimation
  Quick read: https://www.marktechpost.com/2023/03/03/deep-learning-for-deep-objects-zoedepth-is-an-ai-model-for-multi-domain-depth-estimation/
  Paper: https://arxiv.org/pdf/2302.12288.pdf
  GitHub: https://github.com/isl-org/ZoeDepth
- Testing ControlNet on Unreal Engine 5
- ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth
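ZoeDepth's headline idea is combining relative (scale- and shift-invariant) depth with metric depth. As a hedged illustration of what separates the two, not ZoeDepth's actual method: a relative prediction can be mapped onto metric ground truth with a least-squares scale and shift, which is the standard alignment used when evaluating relative-depth models (the function name and synthetic data below are my own):

```python
import numpy as np

def align_scale_shift(relative, metric_gt):
    """Least-squares scale s and shift t so that s * relative + t ≈ metric_gt."""
    A = np.stack([relative.ravel(), np.ones(relative.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, metric_gt.ravel(), rcond=None)
    return s, t

# Synthetic check: metric depth is exactly 2 * relative + 0.5 here,
# so the fit should recover s ≈ 2 and t ≈ 0.5.
rel = np.random.default_rng(0).uniform(0.0, 1.0, (4, 4))
gt = 2.0 * rel + 0.5
s, t = align_scale_shift(rel, gt)
```

A metric model like ZoeDepth aims to skip this per-image alignment step entirely by predicting depth in real units directly.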
DenseDepth
- How to Estimate Depth from a Single Image
  For a long time, the state-of-the-art models for monocular depth estimation, such as DORN and DenseDepth, were built with convolutional neural networks. Recently, however, both transformer-based models such as DPT and GLPN and diffusion-based models like Marigold have achieved remarkable results!
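Results of models like DORN, DenseDepth, DPT, and GLPN are usually compared with a few standard metrics from the depth-estimation literature, such as absolute relative error (AbsRel) and the δ < 1.25 threshold accuracy. A minimal sketch (the function name is my own):

```python
import numpy as np

def depth_metrics(pred, gt):
    """AbsRel (lower is better) and delta < 1.25 accuracy (higher is better)."""
    pred, gt = pred.ravel(), gt.ravel()
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = float(np.mean(ratio < 1.25))
    return abs_rel, delta1

# A perfect prediction scores AbsRel = 0 and delta1 = 1.
gt = np.array([1.0, 2.0, 4.0])
metrics_perfect = depth_metrics(gt.copy(), gt)
```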
- Turn your photo into a Christmas ball. A bas-relief, not a lithophane, so no internal lighting is needed! My free 'Amazing STL Creator' programs are only on Prusa Printables
  Instead of just using grayscale for depth, you should consider "monocular depth estimation," which actually tries to reconstruct a depth map from an RGB image. There are a variety of open-source libraries available, like this one.
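For relief or heightmap uses like the post above, a predicted depth map still has to be normalized to an 8-bit grayscale image; a minimal numpy sketch (the function name and the near-is-high convention are my own assumptions, not part of any particular tool):

```python
import numpy as np

def depth_to_heightmap(depth, invert=True):
    """Normalize a float depth map to 0..255 uint8 for use as a relief heightmap.

    invert=True flips the map so near objects come out high (bas-relief style);
    invert=False keeps larger depth values brighter.
    """
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)
    if invert:
        d = 1.0 - d
    return (d * 255.0).round().astype(np.uint8)

# Tiny demo: a 2x2 depth map spread over the full 0..255 range.
hm = depth_to_heightmap(np.array([[1.0, 2.0], [3.0, 4.0]]), invert=False)
```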
- Hello, my programmers of r/brasil, I need some help...
- Looking for a fast monocular depth estimation library to use in a Rust project
  After that I have to do the same for Python, I think, and then figure out how to use a library like https://github.com/ialhashim/DenseDepth or https://github.com/nianticlabs/monodepth2 for that GStreamer plugin (or element; still trying to grasp the terminology here).
- MiDaS - Monocular Depth Estimation -- Includes an Optimized Model for ROS
  Other models implemented, like github.com/ialhashim/DenseDepth, have constraints on the input (I think a 4:3-only aspect ratio is one of them, if I remember correctly).
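If an input aspect-ratio constraint like the suspected 4:3 one above does apply, one simple workaround is zero-padding the image before inference and cropping the depth map back afterward. A sketch assuming a numpy H×W(×C) image array (the function name and the pad-bottom/pad-right choice are my own):

```python
import numpy as np

def pad_to_4_3(img):
    """Zero-pad an H×W(×C) array so the result is (approximately) 4:3 wide."""
    h, w = img.shape[:2]
    if w * 3 >= h * 4:  # too wide: grow the height
        new_h, new_w = int(np.ceil(w * 3 / 4)), w
    else:               # too tall (or square): grow the width
        new_h, new_w = h, int(np.ceil(h * 4 / 3))
    pads = [(0, new_h - h), (0, new_w - w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pads)

# A square 100x100 image gets padded out to roughly 4:3.
square = pad_to_4_3(np.zeros((100, 100)))
```

Padding on only one side keeps the original pixels at known coordinates, so the corresponding region of the predicted depth map is easy to crop back out.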
What are some alternatives?
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
Online3DViewer - A solution to visualize and explore 3D models in your browser.
monodepth2 - [ICCV 2019] Monocular depth estimation from a single image
Depth-Anything - [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
Deep-Learning-Push-Up-Counter - Deep Learning approach to count the number of repetitions in a video of push ups or pull ups.
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
HugsVision - HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
analytics-zoo - Distributed Tensorflow, Keras and PyTorch on Apache Spark/Flink & Ray