| | INSTINCT | xivo |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 25 | 842 |
| Growth | - | 1.7% |
| Activity | 9.2 | 0.0 |
| Latest commit | 5 days ago | about 1 year ago |
| Language | C++ | C++ |
| License | Mozilla Public License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
INSTINCT
- ImPlot: Interactive plotting library, ImGui style
The library is extremely easy to use and plots look amazing.
Screenshot: https://i.imgur.com/8Mc04NB.png
Repository: https://github.com/UniStuttgart-INS/INSTINCT
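To give a feel for the API, here is a minimal sketch (not from the post; it assumes an ImGui frame is already active, an ImPlot context was created at startup, and `ShowSinePlot` is a hypothetical helper name):

```cpp
// Minimal ImPlot sketch: plot a sine curve inside an existing ImGui frame.
// Assumes ImPlot::CreateContext() was called once at startup.
#include <cmath>
#include "implot.h"

void ShowSinePlot() {
    static float xs[100], ys[100];
    for (int i = 0; i < 100; ++i) {
        xs[i] = 0.1f * i;
        ys[i] = std::sin(xs[i]);
    }
    if (ImPlot::BeginPlot("Sine")) {              // opens the plot widget
        ImPlot::PlotLine("sin(x)", xs, ys, 100);  // one line series
        ImPlot::EndPlot();                        // must pair with BeginPlot
    }
}
```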
xivo
- Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we give a step-by-step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision or annotations) for learning to recover dense point clouds from images captured by cameras and sparse point clouds produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested, here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), the [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry), and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
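As a rough sketch of how objectives in this line of work are typically assembled (the symbols and weights below are illustrative assumptions, not the paper's exact notation):

$$\mathcal{L} = w_{ph}\,\mathcal{L}_{ph} + w_{sz}\,\mathcal{L}_{sz} + w_{sm}\,\mathcal{L}_{sm},$$

where $\mathcal{L}_{ph}$ measures photometric error between an image and its reconstruction warped from neighboring views using the predicted depth and the VIO pose, $\mathcal{L}_{sz}$ penalizes deviation from the sparse depth measurements where they exist, and $\mathcal{L}_{sm}$ encourages local smoothness of the predicted depth. See the paper for the exact formulation.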
- [N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
What are some alternatives?
vive-diy-position-sensor - Code & schematics for position tracking sensor using HTC Vive's Lighthouse system and a Teensy board.
open_vins - An open source platform for visual-inertial navigation research.
s60-maps - Yet another maps app for Symbian OS
rtabmap - RTAB-Map library and standalone application
r3live - A robust, real-time, RGB-colored, LiDAR-inertial-visual tightly coupled state estimation and mapping package
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)