calibrated-backprojection-network VS unsupervised-depth-completion-visual-inertial-odometry

Compare calibrated-backprojection-network vs unsupervised-depth-completion-visual-inertial-odometry and see what their differences are.

| | calibrated-backprojection-network | unsupervised-depth-completion-visual-inertial-odometry |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 110 | 183 |
| Growth | - | - |
| Activity | 0.0 | 5.0 |
| Last commit | 10 months ago | 9 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

calibrated-backprojection-network

Posts with mentions or reviews of calibrated-backprojection-network. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-13.
  • ICCV2021 oral paper improves generalization across sensor platforms
    1 project | news.ycombinator.com | 13 Oct 2021
    Our work "Unsupervised Depth Completion with Calibrated Backprojection Layers" has been accepted as an oral paper at ICCV 2021! We will be giving our talk during Session 10 (10/13 2-3 pm PST / 5-6 pm EST and 10/15 7-8 am PST / 10-11 am EST, https://www.eventscribe.net/2021/ICCV/fsPopup.asp?efp=WlJFS0tHTEMxNTgzMA%20&PosterID=428697%20&rnd=0.4100732&mode=posterinfo). This is joint work with Stefano Soatto at the UCLA Vision Lab.

    In a nutshell: we propose a method for point cloud densification (from camera, IMU, range sensor) that can generalize well across different sensor platforms. The figure in this link illustrates our improvement over existing works: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/figures/overview_teaser.gif

    The slightly longer version: previous methods, once trained on one sensor platform, have trouble generalizing to different platforms when deployed in the wild, because they overfit to the sensors used to collect the training set. Our method takes an image, a sparse point cloud, and the camera calibration as input, which allows us to use a different calibration at test time. This significantly improves generalization to novel scenes captured by sensors different from those used during training. Among our innovations is a "calibrated backprojection layer" that imposes a strong inductive bias on the network (as opposed to trying to learn everything from the data). This design allows our method to achieve the state of the art in both indoor and outdoor scenarios while using a smaller model and offering faster inference (a minimal sketch of the backprojection idea follows after these posts).

    For those interested, here are the links to

    paper: https://arxiv.org/pdf/2108.10531.pdf

    code (pytorch): https://github.com/alexklwong/calibrated-backprojection-network

  • [R] ICCV2021 oral paper -- Unsupervised Depth Completion with Calibrated Backprojection Layers improves generalization across sensor platforms
    2 projects | /r/MachineLearning | 13 Oct 2021
    Code for https://arxiv.org/abs/2108.10531 found: https://github.com/alexklwong/calibrated-backprojection-network
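
To make the "calibrated backprojection" idea above concrete, here is a minimal PyTorch sketch of a layer that lifts the pixel grid into 3D using the camera intrinsics and fuses the result with image features. This is only an illustration under assumptions, not the repository's implementation; the class name, tensor shapes, and the single-convolution fusion are hypothetical.

```python
import torch
import torch.nn as nn


class CalibratedBackprojection(nn.Module):
    """Hypothetical sketch: fuse image features with 3D points backprojected via K."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # 3 extra input channels for the backprojected (x, y, z) coordinates.
        self.conv = nn.Conv2d(in_channels + 3, out_channels, kernel_size=3, padding=1)

    def forward(self, image_features, sparse_depth, K):
        # image_features: (B, C, H, W), sparse_depth: (B, 1, H, W), K: (B, 3, 3)
        b, _, h, w = image_features.shape
        device = image_features.device
        # Homogeneous pixel grid [u, v, 1] of shape (1, 3, H*W).
        v, u = torch.meshgrid(
            torch.arange(h, dtype=torch.float32, device=device),
            torch.arange(w, dtype=torch.float32, device=device),
            indexing="ij")
        pixels = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1)
        # Backproject: X = z * K^{-1} [u, v, 1]^T, with the sparse depth as z.
        rays = torch.matmul(torch.inverse(K), pixels)       # (B, 3, H*W)
        points = rays.view(b, 3, h, w) * sparse_depth       # zero where no measurement
        # Concatenate calibration-aware 3D coordinates with the image features.
        return self.conv(torch.cat([image_features, points], dim=1))
```

Because the calibration K is a run-time input here, a different camera's intrinsics can be plugged in at test time without retraining, which is the intuition behind the cross-sensor generalization described in the post above.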

unsupervised-depth-completion-visual-inertial-odometry

Posts with mentions or reviews of unsupervised-depth-completion-visual-inertial-odometry. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-08-30.
  • Unsupervised Depth Completion from Visual Inertial Odometry
    3 projects | news.ycombinator.com | 30 Aug 2021
    Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?

    Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020 where we do a step by step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).

    In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.

    Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.

    For those interested here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry) and the [dataset](https://github.com/alexklwong/void-dataset) we collected.

  • [N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
    4 projects | /r/MachineLearning | 30 Aug 2021
    In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, you can visit our GitHub page for examples (GIFs) of point clouds produced by our method; a sketch of the kind of unsupervised losses this involves appears below.
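
As a rough illustration of the unsupervised training signal described in these posts, the sketch below combines a sparse-depth consistency term (supervision only where lidar or VIO provides a measurement) with a photometric reprojection term that warps a temporally adjacent frame into the current view using the predicted dense depth and the relative camera pose. It is a simplified sketch under assumptions, not the paper's exact objective; the function names, the lack of term weights, and the omission of a smoothness regularizer are all simplifications.

```python
import torch
import torch.nn.functional as F


def sparse_depth_loss(pred_depth, sparse_depth):
    # Penalize deviation only where a sparse measurement exists (depth > 0).
    mask = (sparse_depth > 0).float()
    return (mask * (pred_depth - sparse_depth).abs()).sum() / mask.sum().clamp(min=1.0)


def photometric_loss(image, adjacent_image, pred_depth, K, pose):
    # Backproject current-frame pixels to 3D with the predicted depth, transform
    # them into the adjacent frame with the 4x4 relative pose, project with K,
    # then compare the warped adjacent image against the current image.
    b, _, h, w = image.shape
    device = image.device
    v, u = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=device),
        torch.arange(w, dtype=torch.float32, device=device),
        indexing="ij")
    pixels = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1)
    points = torch.matmul(torch.inverse(K), pixels) * pred_depth.view(b, 1, -1)
    points = torch.cat([points, torch.ones(b, 1, h * w, device=device)], dim=1)
    proj = torch.matmul(K, torch.matmul(pose, points)[:, :3])   # (B, 3, H*W)
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1).view(b, h, w, 2)
    warped = F.grid_sample(adjacent_image, grid, align_corners=True)
    return (image - warped).abs().mean()
```

In practice, methods of this kind typically sum weighted versions of these terms together with a local smoothness regularizer on the predicted depth.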

What are some alternatives?

When comparing calibrated-backprojection-network and unsupervised-depth-completion-visual-inertial-odometry you can also consider the following projects:

EasyCV - An all-in-one toolkit for computer vision

instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more

manydepth - [CVPR 2021] Self-supervised depth estimation from short sequences

dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO

mmselfsup - OpenMMLab Self-Supervised Learning Toolbox and Benchmark

xivo - X Inertial-aided Visual Odometry

NeuralRecon - Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral

simclr - SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners

simplerecon - [ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions

void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset

3d-transforms - 3D Transforms is a library to easily work with 3D data and make 3D transformations. This library originally started as a few functions here and there for my own work which I then turned into a library.

bpycv - Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)