xivo vs open_vins
| | xivo | open_vins |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 828 | 1,988 |
| Growth | 0.0% | 4.0% |
| Activity | 0.0 | 6.9 |
| Latest commit | about 1 year ago | 3 months ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xivo
-
Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we do a step-by-step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is light-weight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry) and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
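To make the sparse-to-dense setup above concrete, here is a minimal sketch of the problem's input/output shapes: a depth map with only a few valid measurements gets filled in everywhere else. The nearest-neighbor fill used here is a naive baseline of my own, not the paper's method (which learns the completion with a network); `densify_nearest` is a hypothetical helper name.

```python
import numpy as np

def densify_nearest(sparse_depth):
    """Fill zero (missing) pixels with the depth of the nearest valid pixel.

    A naive stand-in for learned depth completion; it only illustrates
    the sparse-input / dense-output structure of the task.
    """
    h, w = sparse_depth.shape
    valid = np.argwhere(sparse_depth > 0)           # (N, 2) coords of measurements
    vals = sparse_depth[sparse_depth > 0]           # (N,) measured depths
    ys, xs = np.mgrid[0:h, 0:w]
    # Squared distance from every pixel to every valid sample: shape (h, w, N)
    d2 = (ys[..., None] - valid[:, 0]) ** 2 + (xs[..., None] - valid[:, 1]) ** 2
    return vals[np.argmin(d2, axis=-1)]             # copy nearest sample's depth

# Toy 4x4 "sparse point cloud": only two pixels carry depth (e.g. from VIO tracks)
sparse = np.zeros((4, 4))
sparse[0, 0] = 2.0
sparse[3, 3] = 5.0
dense = densify_nearest(sparse)  # every pixel now has a depth value
```

In the paper's setting, the sparse points come from lidar or from the features XIVO tracks, and the image provides the cue for how to interpolate between them.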
-
[N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
Our method is light-weight (so you can run it on your computer!) and is built on top of XIVO, our VIO system.
open_vins
-
Modular Open Source Visual SLAM
From what I have understood after reading research papers on VSLAM, modularity is not easy to achieve, since the extracted features and descriptors are intrinsically linked with feature matching and the handling of map points. I would like to know if there are good open-source VSLAM projects that can be used with different feature extractors, so I can get comparative results by changing only the feature extractor. I have tried the pyslam project, which is actually quite good in terms of modularity, but as the author himself points out it is only for academic purposes; when I compared the results of the ORB_SLAM2 feature extractor used through this module against the original ORB_SLAM2 on the KITTI dataset, the results were not comparable. I am also looking into OpenVINS (from initial reading it also uses ORB features, although it does have a base Tracker class that can be modified to create a new tracker with a different descriptor). If anyone has worked with a custom feature extractor incorporated into a prebuilt SLAM pipeline and can guide me on how to implement a custom feature extractor in a SLAM front end using an open-source VSLAM framework, it would be really helpful.
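The "base Tracker class" idea mentioned above can be sketched as an abstract front-end interface that the rest of the pipeline codes against, so extractors become swappable. This is a loose illustration of the pattern, not the actual OpenVINS API; all class and function names here are hypothetical, and the two "extractors" are toy stand-ins rather than real detectors.

```python
from abc import ABC, abstractmethod
import numpy as np

class FeatureExtractor(ABC):
    """Minimal front-end contract: image in, pixel keypoints out."""
    @abstractmethod
    def detect(self, image: np.ndarray) -> np.ndarray:
        """Return an (N, 2) array of (row, col) keypoints."""

class GridSampler(FeatureExtractor):
    """Toy 'extractor': keypoints on a regular grid."""
    def __init__(self, step=4):
        self.step = step
    def detect(self, image):
        h, w = image.shape
        ys, xs = np.mgrid[0:h:self.step, 0:w:self.step]
        return np.stack([ys.ravel(), xs.ravel()], axis=1)

class BrightSpotDetector(FeatureExtractor):
    """Toy 'extractor': pixels above an intensity threshold."""
    def __init__(self, thresh=200):
        self.thresh = thresh
    def detect(self, image):
        return np.argwhere(image > self.thresh)

def track(frontend: FeatureExtractor, image):
    """Downstream matching/mapping code sees only the interface,
    so the extractor can be replaced without touching it."""
    return frontend.detect(image)

# Same synthetic 8x8 frame pushed through two interchangeable front ends
img = np.zeros((8, 8), dtype=np.uint8)
img[2, 5] = 255
kps_grid = track(GridSampler(step=4), img)
kps_spot = track(BrightSpotDetector(), img)
```

The catch the question raises is real: the interface is the easy part, while descriptor dimensionality, matching strategy, and map-point bookkeeping often leak assumptions about the original extractor into the back end.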
-
SLAM vs. Visual Odometry Approaches
Because the standard MSCKF is the only one that doesn't contain map points in the state. Note that this applies only to the standard MSCKF; more modern MSCKF variants like OpenVINS will actually add some SLAM features to the state because it improves accuracy.
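"Adding SLAM features to the state" means appending a landmark's 3D position to the filter's state vector and growing the covariance accordingly. A minimal sketch of that augmentation step, under simplifying assumptions: the cross-covariance between the new landmark and the existing state is left at zero here, whereas a real filter would fill it in from the triangulation Jacobians. The function name is hypothetical.

```python
import numpy as np

def augment_with_feature(x, P, feat, feat_cov):
    """Append a 3D landmark to the state vector and expand the covariance.

    Simplified: cross-covariance blocks are zeroed, which a real
    MSCKF-with-SLAM-features implementation would not do.
    """
    x_aug = np.concatenate([x, feat])
    n = x.shape[0]
    P_aug = np.zeros((n + 3, n + 3))
    P_aug[:n, :n] = P            # existing state covariance
    P_aug[n:, n:] = feat_cov     # new landmark's own covariance
    return x_aug, P_aug

# Toy 15-dim IMU state (orientation, position, velocity, biases) + one landmark
x = np.zeros(15)
P = np.eye(15) * 0.01
feat = np.array([1.0, 2.0, 3.0])
x_aug, P_aug = augment_with_feature(x, P, feat, np.eye(3) * 0.5)
```

The standard MSCKF avoids this growth entirely by marginalizing features out after their track ends, which is exactly the trade-off the comment describes.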
-
Advances in SLAM since 2016
Aside from that there have been some publications of some high quality open source SLAM systems like OpenVINS and ORB-SLAM3.
-
Sfm or slam pseudo code
Check out OpenVINS. It's an implementation of the VINS SLAM project. https://github.com/rpng/open_vins
-
Visual Odometry or SLAM with pose uncertainty output
Generally you want to use a Kalman-filter-based method if you need access to the uncertainties, because it is much easier to extract a subset of the covariance in the Kalman filter form. I would recommend OpenVINS: it is one of the best open-source visual odometry projects, and it is pretty well documented.
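The "extract a subset of the covariance" point is simple to see: for a Gaussian state estimate, the marginal covariance of any subset of variables is just the corresponding block of the full covariance matrix, so a filter that already maintains that matrix can expose pose uncertainty almost for free. A small sketch with an assumed state layout (the index convention is my own, not any particular library's):

```python
import numpy as np

def marginal_covariance(P, idx):
    """Marginal covariance of state entries `idx`: the (idx, idx) block of P."""
    idx = np.asarray(idx)
    return P[np.ix_(idx, idx)]

# Toy full covariance over an assumed [position(3), velocity(3)] state
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = A @ A.T                        # symmetric positive semi-definite
pos_cov = marginal_covariance(P, [0, 1, 2])   # 3x3 position uncertainty
```

Optimization-based back ends can also report marginal covariances, but there it requires inverting (part of) the information matrix, which is why filter-based systems are the easier route when per-pose uncertainty is a hard requirement.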
What are some alternatives?
rtabmap - RTAB-Map library and standalone application
ORB_SLAM3 - ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset
openvslam - OpenVSLAM: A Versatile Visual SLAM Framework
r3live - A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
msckf_vio - Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
Open3D - Open3D: A Modern Library for 3D Data Processing
SuperPoint_SLAM - SuperPoint + ORB_SLAM2
SuperGluePretrainedNetwork - SuperGlue: Learning Feature Matching with Graph Neural Networks (CVPR 2020, Oral)