monodepth2
deep-learning-v2-pytorch
| | monodepth2 | deep-learning-v2-pytorch |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 3,974 | 5,167 |
| Growth | 1.5% | 0.7% |
| Activity | 0.0 | 0.0 |
| Latest commit | 7 months ago | 10 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
monodepth2
- Calculation of an absolute depth map from multiple images or videos.
- Easy to train a monocular (self-)supervised depth estimation model?
  I've used monodepth2 before and it's great: https://github.com/nianticlabs/monodepth2
- Sources: Pixel 6 Pro was supposed to launch with face unlock
  How can a single camera do that? My experience with computer vision is fairly limited, so I'm curious how that would work. My understanding is that you need to be able to generate a point map, use stereo vision, or use some non-CV method, e.g. radar like the Pixel 4. 2D depth estimation can be done with a single camera in a somewhat useful way, but it's not secure (https://github.com/nianticlabs/monodepth2 -- somewhat similar functionality is now in OpenCV). Can you expand on what AI the single camera is being combined with that provides security guarantees?
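The point map / stereo vision route the commenter mentions reduces to the classic disparity-to-depth relation, depth = focal_length × baseline / disparity. A minimal sketch (the focal length and baseline values below are illustrative assumptions, not taken from any real device):

```python
import math

def disparity_to_depth(disparities, focal_px=1000.0, baseline_m=0.1):
    """Convert stereo disparities (pixels) to metric depths (meters).

    depth = focal_length * baseline / disparity; a disparity of zero
    means the point is effectively at infinity.
    """
    return [focal_px * baseline_m / d if d > 0 else math.inf
            for d in disparities]

print(disparity_to_depth([50.0, 100.0, 200.0, 0.0]))
# [2.0, 1.0, 0.5, inf]
```

This is why single-camera ("monocular") approaches like monodepth2 need a learned prior instead: with no baseline, there is no disparity to triangulate from.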
- Can anyone explain the following GitHub code to me? It's my first time using GitHub, so I'm completely lost.
- Estimating camera height, orientation and field of view from a single monocular image.
  I suspect you may have the best success using monocular depth approaches (for example, something like https://github.com/nianticlabs/monodepth2).
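One caveat for tasks like camera-height estimation: self-supervised monocular networks such as monodepth2 predict depth only up to an unknown scale. A common fix (monodepth2's own evaluation uses this) is median scaling against a known reference; a minimal sketch with made-up values:

```python
from statistics import median

def median_scale(pred_depth, ref_depth):
    """Rescale an up-to-scale depth prediction to metric units by
    matching its median to the median of known reference depths."""
    scale = median(ref_depth) / median(pred_depth)
    return [d * scale for d in pred_depth]

pred = [0.5, 1.0, 2.0]   # relative depths from a monocular network
ref = [2.0, 4.0, 8.0]    # sparse metric measurements (e.g. LiDAR)
print(median_scale(pred, ref))
# [2.0, 4.0, 8.0]
```

Without some metric anchor like this, the recovered camera height is only known up to the same unknown scale factor.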
- Looking for a fast monocular depth estimation library to use in a Rust project.
  After that I have to do the same for Python, I think, and then figure out how to use a library like https://github.com/ialhashim/DenseDepth or https://github.com/nianticlabs/monodepth2 for that GStreamer plugin (or element -- still trying to grasp the terminology here).
deep-learning-v2-pytorch
- How can I activate the cells in this GitHub notebook?
  In this link: deep-learning-v2-pytorch/StudentAdmissions.ipynb at master · udacity/deep-learning-v2-pytorch · GitHub
What are some alternatives?
DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning
cs231n - Note and Assignments for CS231n: Convolutional Neural Networks for Visual Recognition
packnet-sfm - TRI-ML Monocular Depth Estimation Repository
stable-diffusion-reference-only - img2img version of stable diffusion. Anime Character Remix. Line Art Automatic Coloring. Style Transfer.
torchdyn - A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
hyperlearn - 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.
ZoeDepth - Metric depth estimation from a single image
glasses - High-quality Neural Networks for Computer Vision 😎
gan-vae-pretrained-pytorch - Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in pytorch.