unsupervised-depth-completion-visual-inertial-odometry

TensorFlow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (RA-L January 2020 & ICRA 2020) (by alexklwong)

Unsupervised-depth-completion-visual-inertial-odometry Alternatives

Similar projects and alternatives to unsupervised-depth-completion-visual-inertial-odometry

NOTE: The number of mentions on this list indicates mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better alternative to unsupervised-depth-completion-visual-inertial-odometry, or higher similarity.

unsupervised-depth-completion-visual-inertial-odometry reviews and mentions

Posts with mentions or reviews of unsupervised-depth-completion-visual-inertial-odometry. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2021-08-30.
  • Unsupervised Depth Completion from Visual Inertial Odometry
    3 projects | news.ycombinator.com | 30 Aug 2021
    Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?

    Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we give a step-by-step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).

    In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from camera images and sparse point clouds produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.

    Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.

    For those interested, here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry), and the [dataset](https://github.com/alexklwong/void-dataset) we collected.

  • [N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
    4 projects | /r/MachineLearning | 30 Aug 2021
    In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from camera images and sparse point clouds produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, you can visit our GitHub page for examples (GIFs) of the point clouds produced by our method.
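
For readers who want a concrete picture of the training signal described in the posts above, here is a minimal PyTorch sketch of an unsupervised depth-completion objective of this general flavor: agreement with the sparse depth measurements, photometric consistency with a reprojected neighboring frame, and an edge-aware smoothness prior. This is not the authors' implementation; the function name, tensor layout, and loss weights are illustrative assumptions.

```python
# Hypothetical sketch of an unsupervised depth-completion loss; placeholder
# names and weights, not the authors' code. The network predicts a dense depth
# map from an RGB image plus a sparse depth map (e.g. VIO-tracked points) and is
# trained without ground-truth dense depth.
import torch


def unsupervised_completion_loss(pred_depth, sparse_depth, image, reprojected_image,
                                 w_photo=1.0, w_sparse=0.5, w_smooth=0.1):
    """pred_depth, sparse_depth: (B, 1, H, W); image, reprojected_image: (B, 3, H, W).
    sparse_depth is 0 at pixels with no measurement."""
    # (1) Sparse depth consistency: penalize only pixels that have a measurement.
    valid = (sparse_depth > 0).float()
    loss_sparse = (valid * (pred_depth - sparse_depth).abs()).sum() / valid.sum().clamp(min=1.0)

    # (2) Photometric consistency: a neighboring frame warped into the current view
    #     (using predicted depth and the VIO pose; warping done outside this sketch)
    #     should look like the current image.
    loss_photo = (image - reprojected_image).abs().mean()

    # (3) Edge-aware smoothness: discourage depth gradients except where the image
    #     itself has strong gradients.
    d_dx = (pred_depth[:, :, :, 1:] - pred_depth[:, :, :, :-1]).abs()
    d_dy = (pred_depth[:, :, 1:, :] - pred_depth[:, :, :-1, :]).abs()
    i_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(dim=1, keepdim=True)
    i_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(dim=1, keepdim=True)
    loss_smooth = (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

    return w_photo * loss_photo + w_sparse * loss_sparse + w_smooth * loss_smooth
```

For the actual loss terms, weights, and pose/reprojection handling, refer to the paper and the repository linked above.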