SuperPoint_SLAM
xivo
| | SuperPoint_SLAM | xivo |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 504 | 828 |
| Growth | - | 0.0% |
| Activity | 1.8 | 0.0 |
| Latest commit | about 3 years ago | about 1 year ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
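As a rough illustration of such a recency-weighted score (the tracker's exact formula is not published, so the exponential decay and the half-life constant below are assumptions), here is a minimal Python sketch:

```python
# Hypothetical recency-weighted activity score: recent commits count
# almost fully, older commits decay toward zero. The half-life value
# is an assumption, not the tracker's actual parameter.
import math
import time

HALF_LIFE_DAYS = 30.0

def activity_score(commit_timestamps, now=None):
    """Each commit contributes exp(-decay * age); newer commits weigh more."""
    now = time.time() if now is None else now
    decay = math.log(2) / (HALF_LIFE_DAYS * 86400)  # per-second decay rate
    return sum(math.exp(-decay * (now - t)) for t in commit_timestamps)
```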
SuperPoint_SLAM
- Modular Open Source Visual SLAM
Hi everyone, I am trying to implement VSLAM with a DNN, specifically for the feature extraction module in the SLAM pipeline, something along the lines of this repo, SuperPoint_SLAM, which integrates SuperPoint feature extraction into ORB_SLAM2.
- Complete Open Source Deep Learning Implementations For V-SLAM
As you've mentioned, there are many papers on deep local feature extraction, like SuperPoint and R2D2. If you wish to use them in SLAM, you can simply replace the feature extraction module of an existing SLAM system with the deep local feature method. An example is shown here: this system uses SuperPoint local features in place of the ORB features in the original ORB-SLAM2 pipeline. https://github.com/KinglittleQ/SuperPoint_SLAM
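To make the swap concrete, here is a minimal Python sketch of the idea (the linked repo itself is C++; `SuperPointExtractor` below is a hypothetical stand-in for a real SuperPoint inference wrapper, not code from that project). Both extractors expose the same points-plus-descriptors interface, so downstream matching and pose estimation stay unchanged; only the descriptor distance differs (Hamming for binary ORB, L2 for float SuperPoint):

```python
import cv2
import numpy as np

class ORBExtractor:
    """Handcrafted baseline, as used by the ORB-SLAM2 front-end."""
    def __init__(self, n_features=1000):
        self.orb = cv2.ORB_create(nfeatures=n_features)

    def extract(self, gray):
        kpts, desc = self.orb.detectAndCompute(gray, None)
        pts = np.float32([k.pt for k in kpts])
        return pts, desc  # binary descriptors -> Hamming distance

class SuperPointExtractor:
    """Hypothetical wrapper around a trained SuperPoint network.
    `model` is assumed to map a normalized grayscale image to
    (N x 2 keypoints, N x 256 float descriptors)."""
    def __init__(self, model):
        self.model = model

    def extract(self, gray):
        return self.model(gray.astype(np.float32) / 255.0)

def match_descriptors(desc_a, desc_b, binary):
    """Brute-force matching; the norm is the only ORB/SuperPoint difference."""
    norm = cv2.NORM_HAMMING if binary else cv2.NORM_L2
    matcher = cv2.BFMatcher(norm, crossCheck=True)
    return matcher.match(desc_a, desc_b)
```

Because the rest of the pipeline only consumes keypoints, descriptors, and matches, switching the front-end is a drop-in change.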
xivo
- Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we do a step-by-step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Xiaohan Fei, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry) and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
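As a minimal illustration of the unsupervised training signal described above (heavily simplified; the paper's full objective also includes a photometric reprojection term across temporal frames, and all names below are illustrative rather than taken from the released code), here is a sketch in PyTorch:

```python
# Simplified unsupervised depth-completion loss: the network's dense
# depth prediction is anchored to the sparse VIO/lidar measurements
# where they exist, and regularized by a smoothness prior elsewhere.
import torch

def depth_completion_loss(pred_depth, sparse_depth, w_sparse=1.0, w_smooth=0.1):
    # sparse_depth is zero wherever there is no measurement
    valid = sparse_depth > 0
    sparse_term = torch.abs(pred_depth - sparse_depth)[valid].mean()

    # total-variation style smoothness on the dense prediction
    dx = torch.abs(pred_depth[..., :, 1:] - pred_depth[..., :, :-1]).mean()
    dy = torch.abs(pred_depth[..., 1:, :] - pred_depth[..., :-1, :]).mean()

    return w_sparse * sparse_term + w_smooth * (dx + dy)
```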
- [N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
Our method is lightweight (so you can run it on your computer!) and is built on top of XIVO, our VIO system.
What are some alternatives?
rtabmap - RTAB-Map library and standalone application
open_vins - An open source platform for visual-inertial navigation research.
orb_slam_2_ros - A ROS implementation of ORB_SLAM2
openvslam - OpenVSLAM: A Versatile Visual SLAM Framework
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset
pyslam - pySLAM contains a monocular Visual Odometry (VO) pipeline in Python. It supports many modern local features based on Deep Learning.
r3live - A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
maplab - A Modular and Multi-Modal Mapping Framework
Open3D - Open3D: A Modern Library for 3D Data Processing