| | model_zoo | void-dataset |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 3 | 103 |
| Growth | - | - |
| Activity | 7.6 | 0.0 |
| Last commit | 9 months ago | almost 2 years ago |
| Language | Shell | Shell |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
## model_zoo
Hi, do you guys have any labelled dataset for training small robots to recognize common objects?
You may want to try the Remyx engine - you can train a model by describing the labels you want without having to upload labeled data. Here's a demo. Your first model is free :) Here is our model zoo with some common object models that might be useful too.
## void-dataset
Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020, where we do a step-by-step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
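To make the idea above concrete, here is a minimal sketch of the kind of loss terms an unsupervised depth completion method can train on: a data term that penalizes the predicted dense depth only where sparse measurements (from lidar or VIO) exist, plus a smoothness regularizer over the dense prediction. This is an illustrative toy in NumPy, not the authors' actual formulation; the function names, loss weights, and the omission of the photometric reprojection term (which the full method would add using camera images and poses) are my assumptions.

```python
import numpy as np

def sparse_consistency_loss(pred, sparse, mask):
    """L1 penalty between predicted and measured depth,
    applied only at pixels where a sparse measurement exists."""
    return np.abs((pred - sparse) * mask).sum() / max(mask.sum(), 1)

def smoothness_loss(pred):
    """Encourage locally smooth depth via first-order differences."""
    dx = np.abs(np.diff(pred, axis=1)).mean()
    dy = np.abs(np.diff(pred, axis=0)).mean()
    return dx + dy

def total_loss(pred, sparse, mask, w_sparse=1.0, w_smooth=0.1):
    # Hypothetical weights; a full method would also add a
    # photometric reprojection term from camera images and poses.
    return (w_sparse * sparse_consistency_loss(pred, sparse, mask)
            + w_smooth * smoothness_loss(pred))
```

Because no dense ground truth appears in either term, the network can be trained with only the camera and sparse range inputs themselves, which is what makes the approach unsupervised.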
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry) and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
-
[N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
Code for https://arxiv.org/abs/1905.08616 found: https://github.com/alexklwong/void-dataset
## What are some alternatives?
video-inference - Example showing how to do inference on a video file with Roboflow Infer
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
xivo - X Inertial-aided Visual Odometry
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
DAD-3DHeads - Official repo for DAD-3DHeads: A Large-scale Dense, Accurate and Diverse Dataset for 3D Head Alignment from a Single Image (CVPR 2022).
learning-topology-synthetic-data - Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)