unsupervised-depth-completion-visual-inertial-odometry
bpycv
| | unsupervised-depth-completion-visual-inertial-odometry | bpycv |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 183 | 455 |
| Growth | - | - |
| Activity | 5.0 | 4.6 |
| Latest commit | 10 months ago | about 2 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
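The exact formula behind the activity number is not published, but the description above ("recent commits have higher weight than older ones") suggests a recency-weighted count. A minimal sketch of that idea, assuming a hypothetical exponential-decay weighting (the half-life and the `activity_score` function are illustrative, not the site's actual implementation):

```python
from datetime import datetime, timedelta
from math import exp, log

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit count: a commit made today contributes 1.0,
    and each commit's weight halves every `half_life_days` days."""
    return sum(
        exp(-max((now - d).days, 0) * log(2) / half_life_days)
        for d in commit_dates
    )

now = datetime(2023, 6, 1)
recent = [now - timedelta(days=d) for d in (1, 2, 3)]      # active project
old = [now - timedelta(days=d) for d in (300, 310, 320)]   # stale project
```

With this weighting, three commits from last week score far higher than three commits from ten months ago, matching the intuition that "10 months ago" vs. "about 2 months ago" latest commits produce different activity values.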
unsupervised-depth-completion-visual-inertial-odometry
-
Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020 where we do a step by step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested, here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), the [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry), and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
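The key property of the method described above is that it learns without ground-truth dense depth: the sparse points from lidar or VIO supervise the dense prediction directly wherever they exist, and a regularizer fills in the rest. A minimal NumPy sketch of that supervision signal, assuming a simple L1 sparse-consistency term plus a first-order smoothness term (function names and weights are illustrative, not the paper's exact loss):

```python
import numpy as np

def sparse_depth_loss(pred, sparse_depth, validity_map):
    """L1 error between predicted dense depth and the sparse measurements,
    averaged only over pixels where a sparse point exists (validity_map == 1)."""
    diff = np.abs(pred - sparse_depth) * validity_map
    return diff.sum() / max(validity_map.sum(), 1.0)

def smoothness_loss(pred):
    """Regularizer: penalize mean absolute depth gradients so the network
    interpolates plausibly between the sparse supervision points."""
    dx = np.abs(np.diff(pred, axis=1)).mean()
    dy = np.abs(np.diff(pred, axis=0)).mean()
    return dx + dy

def total_loss(pred, sparse_depth, validity_map, w_smooth=0.1):
    return sparse_depth_loss(pred, sparse_depth, validity_map) \
        + w_smooth * smoothness_loss(pred)
```

The full method additionally uses photometric consistency across temporally adjacent frames (warping via the VIO poses) as its main unsupervised signal; this sketch shows only the sparse-anchoring and smoothness components.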
-
[N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, you can visit our github page for examples (gifs) of point clouds produced by our method.
bpycv
- Bpycv: Computer Vision and Deep Learning Utils for Blender
-
Python code that takes pictures from different viewpoints of a 3D model
Blender is capable of running in a headless mode. I've had the exact same use case nearly a thousand times. This library worked out well for my purposes: https://github.com/DIYer22/bpycv
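The geometry behind "pictures from different viewpoints" is independent of Blender itself: sample camera positions on a sphere around the object and build a look-at rotation for each. A minimal NumPy sketch of that step, assuming a z-up world and a camera whose -z axis is the viewing direction (function names are illustrative; in practice you would assign the resulting pose to a Blender camera and render with `blender -b`):

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world rotation whose -z column points from eye toward target."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Columns: camera x (right), camera y (up), camera z (= -viewing direction).
    return np.stack([right, true_up, -forward], axis=1)

def sphere_viewpoints(n, radius=3.0, elevation_deg=30.0):
    """n camera positions evenly spaced on a circle around the origin."""
    el = np.deg2rad(elevation_deg)
    az = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(el) * np.cos(az),
                     radius * np.cos(el) * np.sin(az),
                     np.full(n, radius * np.sin(el))], axis=1)
```

Each returned rotation is a proper orthonormal matrix, so it can be converted to the Euler angles or quaternion a Blender camera object expects before rendering headlessly.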
-
[P] Synthetic Data for CV with Python and Blender
Can someone who knows a bit about this compare it to, e.g., bpycv? It seems like they are doing very similar things.
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
labelme2coco - A lightweight package for converting your labelme annotations into COCO object detection format.
dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
zpy - Synthetic data for computer vision. An open source toolkit using Blender and Python.
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
labelme - Image Polygonal Annotation with Python (polygon, rectangle, circle, line, point and image-level flag annotation).
xivo - X Inertial-aided Visual Odometry
sahi - Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
simclr - SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
learning-topology-synthetic-data - Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset
pyrender - Easy-to-use glTF 2.0-compliant OpenGL renderer for visualization of 3D scenes.