monodepth2
packnet-sfm
| | monodepth2 | packnet-sfm |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 3,974 | 1,198 |
| Growth | 1.5% | 0.8% |
| Activity | 0.0 | 0.0 |
| Latest commit | 7 months ago | 10 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
monodepth2
- Calculation of an absolute depth map from multiple images or videos.
- Easy to train a monocular (self) supervised depth estimation model?
  I've used monodepth2 before and it's great: https://github.com/nianticlabs/monodepth2
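In monodepth2, the network predicts a sigmoid disparity in [0, 1] that is rescaled to a metric-like depth. A pure-Python sketch of that conversion, mirroring the `disp_to_depth` helper in the repo's `layers.py` (the 0.1–100 m default range is taken from the repo; treat exact details as an approximation of the original):

```python
def disp_to_depth(disp, min_depth=0.1, max_depth=100.0):
    """Rescale a sigmoid disparity in [0, 1] to a depth in [min_depth, max_depth]."""
    min_disp = 1.0 / max_depth   # disparity of the farthest point
    max_disp = 1.0 / min_depth   # disparity of the nearest point
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    return 1.0 / scaled_disp

# Disparity 1.0 maps to the nearest depth, 0.0 to the farthest.
near = disp_to_depth(1.0)   # ~0.1
far = disp_to_depth(0.0)    # ~100.0
```

Because the network never outputs exactly 0 or 1, predicted depths always stay strictly inside the chosen range.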
- Sources: Pixel 6 Pro was supposed to launch with face unlock
  How can a single camera do that? My experience with computer vision is fairly limited, so I'm curious how that would work. My understanding is that you need to be able to generate a point map, use stereo vision, or use some non-CV method, e.g. radar like the Pixel 4. 2D depth estimation can be done with a single camera in a somewhat useful way, but it's not secure (https://github.com/nianticlabs/monodepth2 -- now somewhat similar functionality exists in OpenCV). Can you expand on what AI the single camera is being combined with that provides security guarantees?
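The "point map" the commenter mentions can be recovered from a single predicted depth map plus the camera intrinsics by back-projecting every pixel through the pinhole model. A minimal pure-Python sketch (the intrinsics here are hypothetical toy values, not from either project):

```python
def backproject(depth, fx, fy, cx, cy):
    """Turn a per-pixel depth map (list of rows) into camera-space 3D points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth map with made-up intrinsics.
pts = backproject([[1.0, 1.0], [2.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

This is why monocular depth alone isn't a secure biometric: the geometry is only as trustworthy as the network's depth guess, which a flat photo can fool.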
- Can anyone explain the following GitHub code to me? Also, it's my first time using GitHub, so I'm completely lost.
- Estimating camera height, orientation and field of view from a single monocular image.
  I suspect you may have the best success by using monocular depth approaches (for example something like this: https://github.com/nianticlabs/monodepth2).
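One way to follow that suggestion: run a monocular depth model, pick pixels assumed to lie on the ground, and read the camera height off the pinhole geometry. Under the simplifying assumptions of a level (zero-pitch) camera and image y pointing down, every true ground pixel satisfies height = Z * (v - cy) / fy. A hypothetical sketch (not from either repo):

```python
def camera_height(ground_depths, fy, cy):
    """Estimate camera height from (row_index, depth) pairs for pixels
    assumed to lie on a flat ground plane, assuming a zero-pitch pinhole
    camera with the image y-axis pointing down. For every true ground
    pixel, Z * (v - cy) / fy equals the camera height."""
    heights = sorted(z * (v - cy) / fy for v, z in ground_depths)
    return heights[len(heights) // 2]  # median is robust to a few outliers

# Synthetic check: a 1.5 m camera (fy=500, cy=240) observing ground pixels.
samples = [(v, 1.5 * 500 / (v - 240)) for v in (300, 350, 400, 450)]
h = camera_height(samples, fy=500, cy=240)  # ~1.5
```

With real predictions you'd first segment the ground and fit a plane (handling pitch) rather than assume it, but the height falls out of the same back-projection.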
- Looking for a fast monocular depth estimation library to use in a Rust project.
  After that I have to do the same for Python, I think, and then figure out how to use a library like https://github.com/ialhashim/DenseDepth or https://github.com/nianticlabs/monodepth2 for that GStreamer plugin (or element, still trying to grasp the terminology here).
packnet-sfm
- Easy to train a monocular (self) supervised depth estimation model?
  I would go with https://github.com/TRI-ML/packnet-sfm. There is plenty of support from the community on the GitHub repo; it is very well known and very well tested. You can see it as a modern version of MonoDepth 2.
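The distinguishing idea in PackNet (from the "3D Packing for Self-Supervised Monocular Depth Estimation" paper behind the repo) is replacing lossy striding and pooling with invertible packing blocks built on a space-to-depth rearrangement. A pure-Python sketch of just the space-to-depth step for one channel (the 3D convolutions of the real packing blocks are omitted):

```python
def space_to_depth(x, r=2):
    """Fold each r x r spatial block of a single-channel feature map
    (a list of rows) into r*r output channels, shrinking H and W by r.
    Unlike strided convolution or pooling, no values are discarded."""
    h, w = len(x), len(x[0])
    out = [[[0.0] * (w // r) for _ in range(h // r)] for _ in range(r * r)]
    for i in range(h):
        for j in range(w):
            c = (i % r) * r + (j % r)        # which output channel
            out[c][i // r][j // r] = x[i][j]
    return out

feat = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
packed = space_to_depth(feat)  # 4 channels, each 2x2
```

Because the rearrangement is a pure permutation, an unpacking block can invert it exactly, which is part of why the architecture preserves fine detail better than downsampling encoders.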
What are some alternatives?
DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning
cs231n - Note and Assignments for CS231n: Convolutional Neural Networks for Visual Recognition
torchdyn - A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
ZoeDepth - Metric depth estimation from a single image
glasses - High-quality Neural Networks for Computer Vision 😎
depth-estimate-gui - Depth Estimate GUI - Windows, Mac, Linux
deep-learning-v2-pytorch - Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101
IJCAI2023-CoNR - IJCAI2023 - Collaborative Neural Rendering using Anime Character Sheets
DeepLearning - Contains all my works, references for deep learning
RBOT - Region-based Object Tracking
CVPR2023-DMVFN - CVPR2023 (highlight) - A Dynamic Multi-Scale Voxel Flow Network for Video Prediction