Python depth-estimation

Open-source Python projects categorized as depth-estimation

Top 12 Python depth-estimation Projects

  • OpenSeeFace

    Robust realtime face and facial landmark tracking on CPU with Unity integration

    Project mention: Running OpenSeeFace on Linux with python 3.10 | reddit.com/r/VirtualYoutubers | 2022-06-22
  • MonoRec

    Official implementation of the paper: MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera (CVPR 2021)

    Project mention: Questions for SLAM/SfM for Dense 3D Reconstruction (DSO vs ORB, Monofusion etc.) | reddit.com/r/computervision | 2022-03-11

    I've stumbled upon this and that using DL, and will try to evaluate them simultaneously while developing something using pySLAM. At least that's the current plan.


  • manydepth

    [CVPR 2021] Self-supervised depth estimation from short sequences

  • stereoDepth

    Single-camera and stereo calibration, plus disparity calculation.

    Project mention: converting a disparity map to a depth map given calibration file | reddit.com/r/computervision | 2022-04-24

    First, I believe you'll need to get the Q matrix from stereoCalibrate/stereoRectify (see https://github.com/aliyasineser/stereoDepth/blob/master/stereo_camera_calibration.py)
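The Q matrix produced by stereoRectify encodes the focal length and stereo baseline, and cv2.reprojectImageTo3D uses it to apply the pinhole relation Z = f * B / d. Here is a minimal NumPy sketch of that conversion; the focal length and baseline values are made up for illustration, and real values would come from your calibration file:

```python
import numpy as np

# Hypothetical calibration values for illustration; in practice these come
# from cv2.stereoCalibrate / cv2.stereoRectify (the Q matrix encodes them).
focal_px = 700.0      # focal length in pixels
baseline_m = 0.12     # distance between the two cameras in meters

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters).

    Applies the pinhole stereo relation Z = f * B / d, the same math
    cv2.reprojectImageTo3D performs via the Q matrix.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0          # zero disparity means no measurable depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[70.0, 35.0],
                 [0.0,  7.0]])
depth = disparity_to_depth(disp, focal_px, baseline_m)
# e.g. 700 px * 0.12 m / 70 px = 1.2 m for the top-left pixel
```

Invalid (zero) disparities are mapped to infinity here; depending on the application you may prefer NaN or a sentinel value instead.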

  • deep-video-mvs

    Code for "DeepVideoMVS: Multi-View Stereo on Video with Recurrent Spatio-Temporal Fusion" (CVPR 2021)

    Project mention: helpful pointers to state-of-the-art material for depth estimation from multi-view videos captured from cameras with arbitrary poses. | reddit.com/r/computervision | 2022-04-20

    If deep learning is an option, then you might want to check out http://zak.murez.com/atlas/, https://zju3dv.github.io/neuralrecon/, https://github.com/ardaduz/deep-video-mvs and the references therein. These methods can be better than classical ones, especially if overfitted on a specific type of scene.

  • Insta-DM

    Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)

  • DiverseDepth

    The code and data of DiverseDepth


  • unsupervised-depth-completion-visual-inertial-odometry

    Tensorflow implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)

    Project mention: Unsupervised Depth Completion from Visual Inertial Odometry | news.ycombinator.com | 2021-08-30

    Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?

    Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020 where we do a step by step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).

    In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images captured by cameras and sparse point clouds produced by lidar or tracked by visual-inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.

    Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.

    For those interested here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry) and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
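To make the depth-completion problem setup concrete, here is a toy nearest-neighbor densification of a sparse depth map. This is only a naive baseline sketch of the input/output structure (sparse measurements in, dense map out), not the learned method from the paper:

```python
import numpy as np

def densify_nearest(sparse_depth):
    """Fill zeros in a sparse depth map with the nearest known depth value.

    A brute-force nearest-neighbor baseline: learned depth-completion
    methods replace this with a network that also uses the image.
    """
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)          # locations of known depths
    vals = sparse_depth[ys, xs]
    gy, gx = np.mgrid[0:h, 0:w]
    # Squared distance from every pixel to every known measurement.
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return vals[np.argmin(d2, axis=-1)]

# Two sparse measurements (e.g. VIO feature tracks) on a 4x4 grid.
sparse = np.zeros((4, 4))
sparse[0, 0] = 1.0
sparse[3, 3] = 5.0
dense = densify_nearest(sparse)
```

Every pixel in the output inherits the depth of its closest measurement, which is exactly the kind of blocky artifact that learned, image-guided completion methods are designed to avoid.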

  • SGDepth

    [ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance

  • calibrated-backprojection-network

    PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)

    Project mention: ICCV2021 oral paper improves generalization across sensor platforms | news.ycombinator.com | 2021-10-13

    Our work "Unsupervised Depth Completion with Calibrated Backprojection Layers" has been accepted as an oral paper at ICCV 2021! We will be giving our talk during Session 10 (10/13 2-3 pm PST / 5-6 pm EST and 10/15 7-8 am PST / 10-11 am EST, https://www.eventscribe.net/2021/ICCV/fsPopup.asp?efp=WlJFS0tHTEMxNTgzMA%20&PosterID=428697%20&rnd=0.4100732&mode=posterinfo). This is joint work with Stefano Soatto at the UCLA Vision Lab.

    In a nutshell: we propose a method for point cloud densification (from camera, IMU, range sensor) that can generalize well across different sensor platforms. The figure in this link illustrates our improvement over existing works: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/figures/overview_teaser.gif

    The slightly longer version: previous methods, when trained on one sensor platform, have problems generalizing to different ones when deployed in the wild. This is because they are overfitted to the sensors used to collect the training set. Our method takes an image, a sparse point cloud, and the camera calibration as input, which allows us to use a different calibration at test time. This significantly improves generalization to novel scenes captured by sensors different from those used during training. Among our innovations is a "calibrated backprojection layer" that imposes a strong inductive bias on the network (as opposed to trying to learn everything from the data). This design allows our method to achieve the state of the art on both indoor and outdoor scenarios while using a smaller model size and boasting a faster inference time.

    For those interested, here are the links to

    paper: https://arxiv.org/pdf/2108.10531.pdf

    code (pytorch): https://github.com/alexklwong/calibrated-backprojection-network
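The core geometric operation the calibrated backprojection layer builds on is lifting a pixel with a depth value into 3D camera coordinates using the intrinsic matrix K. Here is a minimal sketch of plain backprojection; the K values are hypothetical, and the paper's contribution is baking this calibration-aware lifting into the network layers rather than this formula itself:

```python
import numpy as np

# Hypothetical pinhole intrinsics: fx = fy = 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Return the 3D camera-frame point for pixel (u, v) at the given depth."""
    pixel_h = np.array([u, v, 1.0])       # homogeneous pixel coordinates
    ray = np.linalg.inv(K) @ pixel_h      # viewing ray through the pixel
    return depth * ray                    # scale the ray to the known depth

# The principal point at depth 2 m lands on the optical axis: (0, 0, 2).
point = backproject(320.0, 240.0, 2.0, K)
```

Because K is an explicit input here, swapping in a different camera's calibration at test time changes the backprojected geometry correctly, which is the intuition behind the method's cross-sensor generalization.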

  • merged_depth

    Monocular Depth Estimation - Weighted-average prediction from multiple pre-trained depth estimation models

  • learning-topology-synthetic-data

    Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)

    Project mention: Want to use synthetic data, but don't want to deal with domain gap? | news.ycombinator.com | 2021-09-24

    For those interested, here is our source code with pretrained models (it is lightweight, so it runs on your local machine!) and the arXiv version of our paper.

    paper: https://arxiv.org/pdf/2106.02994.pdf

    Here are some of the reconstructions produced by our method:

    https://github.com/alexklwong/learning-topology-synthetic-da...

    https://github.com/alexklwong/learning-topology-synthetic-da...

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2022-06-22.

Index

What are some of the best open-source depth-estimation projects in Python? This list will help you:

Project Stars
1 OpenSeeFace 748
2 MonoRec 422
3 manydepth 414
4 stereoDepth 158
5 deep-video-mvs 155
6 Insta-DM 154
7 DiverseDepth 152
8 unsupervised-depth-completion-visual-inertial-odometry 148
9 SGDepth 148
10 calibrated-backprojection-network 59
11 merged_depth 36
12 learning-topology-synthetic-data 25