runit_sv_addons vs unsupervised-depth-completion-visual-inertial-odometry
| | runit_sv_addons | unsupervised-depth-completion-visual-inertial-odometry |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 4 | 185 |
| Growth | - | - |
| Activity | 0.0 | 5.0 |
| Last Commit | over 1 year ago | 10 months ago |
| Language | Shell | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
runit_sv_addons
-
unsupervised-depth-completion-visual-inertial-odometry
Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we do a step-by-step walkthrough of our paper *Unsupervised Depth Completion from Visual Inertial Odometry* (joint work with Xiaohan Fei, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from camera images and sparse point clouds produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested, here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry), and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
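To give a rough sense of how an unsupervised objective like this fits together, here is a minimal PyTorch-style sketch combining a sparse depth consistency term, a photometric reconstruction term, and an edge-aware smoothness prior. The exact loss terms and weights are in the linked paper and code (the official implementation is not reproduced here), and all names in this snippet are illustrative rather than the authors' API.

```python
# Minimal sketch (not the official implementation) of an unsupervised
# depth completion objective: the network predicts dense depth from an
# image plus a sparse depth map, and is trained without ground truth by
# combining (1) a sparse depth consistency term at pixels where lidar/VIO
# provides depth, (2) a photometric term against a neighboring frame
# warped into the current view, and (3) an edge-aware smoothness prior.
import torch

def sparse_depth_loss(pred_depth, sparse_depth):
    # Penalize error only where the sparse input actually has a measurement.
    valid = (sparse_depth > 0).float()
    return (valid * (pred_depth - sparse_depth).abs()).sum() / valid.sum().clamp(min=1.0)

def smoothness_loss(pred_depth, image):
    # Edge-aware smoothness: allow depth discontinuities at image edges.
    dz_dx = (pred_depth[..., :, :-1] - pred_depth[..., :, 1:]).abs()
    dz_dy = (pred_depth[..., :-1, :] - pred_depth[..., 1:, :]).abs()
    di_dx = (image[..., :, :-1] - image[..., :, 1:]).abs().mean(dim=1, keepdim=True)
    di_dy = (image[..., :-1, :] - image[..., 1:, :]).abs().mean(dim=1, keepdim=True)
    return (dz_dx * torch.exp(-di_dx)).mean() + (dz_dy * torch.exp(-di_dy)).mean()

def photometric_loss(image, reprojected_image):
    # L1 difference between the current frame and a neighboring frame warped
    # into it using the predicted depth and the camera pose.
    return (image - reprojected_image).abs().mean()

def total_loss(pred_depth, sparse_depth, image, reprojected_image,
               w_photo=1.0, w_sparse=1.0, w_smooth=0.1):
    return (w_photo * photometric_loss(image, reprojected_image)
            + w_sparse * sparse_depth_loss(pred_depth, sparse_depth)
            + w_smooth * smoothness_loss(pred_depth, image))
```

In the paper, the relative pose used to warp the neighboring frame comes from the VIO system rather than a learned pose network, which is part of what keeps the method lightweight.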
-
[N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
What are some alternatives?
dinit - Service monitoring / "init" system
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
runit-services - Runit service scripts
dino - PyTorch code for training Vision Transformers with the self-supervised learning method DINO
deezer-void - I really tried to package this for xbps-src. But... Well, this works: native Deezer Desktop on Void Linux, yay! Based on scripts by @siphomateke, @SibrenVasse, and @davidbailey00.
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
linux-installer - Universal GNU+Linux installer script
xivo - X Inertial-aided Visual Odometry
sv - Comma (and other) separated values
simclr - SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
dotfiles - :whale2::computer::rocket: dotfiles in docker
void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset