unsupervised-depth-completion-visual-inertial-odometry
instant-ngp
| | unsupervised-depth-completion-visual-inertial-odometry | instant-ngp |
|---|---|---|
| Mentions | 2 | 147 |
| Stars | 183 | 15,329 |
| Growth | - | 2.2% |
| Activity | 5.0 | 6.7 |
| Latest commit | 9 months ago | 9 days ago |
| Language | Python | Cuda |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
unsupervised-depth-completion-visual-inertial-odometry
-
Unsupervised Depth Completion from Visual Inertial Odometry
Hey there, interested in camera and range sensor fusion for point cloud (depth) completion?
Here is an extended version of our [talk](https://www.youtube.com/watch?v=oBCKO4TH5y0) at ICRA 2020 where we do a step by step walkthrough of our paper Unsupervised Depth Completion from Visual Inertial Odometry (joint work with Fei Xiaohan, Stephanie Tsuei, and Stefano Soatto).
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, here is an [example](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry/blob/master/figures/void_teaser.gif?raw=true) of the point clouds produced by our method.
Our method is lightweight (so you can run it on your computer!) and is built on top of [XIVO](https://github.com/ucla-vision/xivo), our VIO system.
For those interested, here are links to the [paper](https://arxiv.org/pdf/1905.08616.pdf), [code](https://github.com/alexklwong/unsupervised-depth-completion-visual-inertial-odometry), and the [dataset](https://github.com/alexklwong/void-dataset) we collected.
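To make the completion problem concrete (recovering a dense depth map from a handful of sparse measurements), here is a naive nearest-neighbor baseline in plain Python. This is only an illustrative sketch, not the paper's learned method; the function name and the 0.0-means-missing convention are assumptions for this example:

```python
def densify_sparse_depth(sparse_depth):
    """Naive nearest-neighbor densification of a sparse depth map.

    sparse_depth: 2D list of floats; 0.0 marks pixels with no
    measurement (e.g. unpopulated lidar or VIO returns).
    Assumes at least one measured pixel exists.
    """
    # Collect all pixels that carry a real depth measurement.
    measured = [(r, c, v)
                for r, row in enumerate(sparse_depth)
                for c, v in enumerate(row) if v > 0.0]
    dense = []
    for r, row in enumerate(sparse_depth):
        out = []
        for c, v in enumerate(row):
            if v > 0.0:
                out.append(v)  # keep the sparse measurement as-is
            else:
                # Copy depth from the nearest measured pixel
                # (squared Euclidean distance in pixel space).
                _, nearest = min(
                    ((r - pr) ** 2 + (c - pc) ** 2, pv)
                    for pr, pc, pv in measured)
                out.append(nearest)
        dense.append(out)
    return dense
```

A learned method like the one in the paper instead uses the image itself to decide how depth should propagate between sparse points, which is what makes the completed clouds look so much better than this kind of interpolation.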
-
[N][R] ICRA 2020 extended talk for Unsupervised Depth Completion from Visual Inertial Odometry
In this talk, we present an unsupervised method (no need for human supervision/annotations) for learning to recover dense point clouds from images, captured by cameras, and sparse point clouds, produced by lidar or tracked by visual inertial odometry (VIO) systems. To illustrate what I mean, you can visit our github page for examples (gifs) of point clouds produced by our method.
instant-ngp
- I want a 3d scanner...
-
Mind-blowing results (LoRA/Checkpoint mix)
This is really cool! Could you now use something like this to turn the new images into a 3D model? Or even use OpenPose (ControlNet) to generate a bunch of images from different angles and use Instant NeRF to make a 3D model for free!
-
Scanning in real life environments to be viewed in VR >>> taking pictures. Simple process from video -> render, using instant-ngp
At this point you should have Instant-NGP set up. The script for the COLMAP processing is in the repo, along with the steps to perform it. My exact parameters were 3 fps and an aabb of 16. It is pretty helpful to add the scripts directory to your PATH for easy access system-wide.
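The video-to-NeRF preprocessing described above can be scripted. A small helper like the one below builds the invocation of instant-ngp's COLMAP helper script with the fps and aabb parameters mentioned; the flag names follow `scripts/colmap2nerf.py` from the instant-ngp repo, but check `python scripts/colmap2nerf.py --help` on your checkout, since they may differ by version:

```python
import subprocess

def build_colmap2nerf_cmd(video_path, fps=3, aabb_scale=16):
    """Build the command line for instant-ngp's COLMAP helper.

    Extracts frames from a video at `fps`, runs COLMAP for camera
    poses, and writes a transforms.json with the given aabb_scale.
    Flag names assume a recent instant-ngp checkout.
    """
    return [
        "python", "scripts/colmap2nerf.py",
        "--video_in", video_path,
        "--video_fps", str(fps),
        "--run_colmap",
        "--aabb_scale", str(aabb_scale),
    ]

# Actually running it requires COLMAP and ffmpeg on your PATH:
# subprocess.run(build_colmap2nerf_cmd("scan.mp4"), check=True)
```

Wrapping the command this way makes it easy to rerun the same preprocessing on every new capture without retyping the parameters.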
-
[D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
[Project Page] https://nvlabs.github.io/instant-ngp/
-
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
instant-ngp ([1]) from NVIDIA can render NeRF in VR in real-time, assuming a very good desktop video card. Note that instant-ngp is not as photo-realistic as Zip-NeRF. But it's still very good!
1. https://github.com/NVlabs/instant-ngp
- How about Ranger Green?
-
Roast my MC kit
Playing around with NeRF (https://github.com/NVlabs/instant-ngp) to create some 3D gear reveals. I think this is a fun way to show off a kit, what do you think?
- Has anyone tried to generate images from enough angles to feed Nvidia Nerf to make 3D models?
-
Instant NGP: how to minimize noise and maximize quality? Tips welcome!
Not sure if it's the one you want, but --aabb_scale controls the scene's bounding box (effectively a crop). This page recommends trying a large value of 128 for some outdoor scenes: https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md
-
I NeRF'd the new Taco Bell on Rt. 40
I don't know about lumalabs, but basically all NeRF projects these days are based on NVIDIA's Instant Neural Graphics Primitives (GitHub: instant-ngp). It utilizes COLMAP for SfM (a preprocessing step for the neural network) and runs pretty well on average GeForce cards. The fox example (50 photos) on their page literally takes 5 seconds to complete.
What are some alternatives?
dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
awesome-NeRF - A curated list of awesome neural radiance fields papers
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
xivo - X Inertial-aided Visual Odometry
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
simclr - SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
TensoRF - [ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
void-dataset - Visual Odometry with Inertial and Depth (VOID) dataset
colmap - COLMAP - Structure-from-Motion and Multi-View Stereo
bpycv - Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)
instant-meshes - Interactive field-aligned mesh generator