| | surface_normal_uncertainty | calibrated-backprojection-network |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 208 | 112 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest Commit | over 1 year ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
surface_normal_uncertainty
-
Unofficial Colab notebook to create normal maps using "Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation" from baegwangbin
I needed normal maps for my movie, and I saw in the new ControlNet update that they used https://github.com/baegwangbin/surface_normal_uncertainty to make normals, which gave me better results than previous methods. So I decided to make a Colab to process all my images, because I couldn't find a way to do it in Automatic1111.
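For flavor, here is a minimal sketch of the kind of batch loop such a Colab might run over a folder of frames. `estimate_normals` is a hypothetical placeholder, not the repo's actual API; the real entry point lives in the repo's test/inference script.

```python
# Hypothetical batch-processing loop for generating normal maps.
# estimate_normals() is a placeholder for whatever inference function
# the surface_normal_uncertainty repo exposes; check its test script
# for the real entry point.
from pathlib import Path
from PIL import Image

def estimate_normals(image: Image.Image) -> Image.Image:
    """Placeholder: run the normal-estimation model on one image."""
    raise NotImplementedError("wire this to the repo's inference code")

in_dir = Path("frames")        # input movie frames (assumed layout)
out_dir = Path("normals")      # where normal maps are written
out_dir.mkdir(exist_ok=True)

for path in sorted(in_dir.glob("*.png")):
    normal_map = estimate_normals(Image.open(path).convert("RGB"))
    normal_map.save(out_dir / path.name)
```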
calibrated-backprojection-network
-
ICCV 2021 oral paper that improves generalization across sensor platforms
Our work "Unsupervised Depth Completion with Calibrated Backprojection Layers" has been accepted as an oral paper at ICCV 2021! We will be giving our talk during Session 10 (10/13 2-3 pm PST / 5-6 pm EST and 10/15 7-8 am PST / 10-11 am EST, https://www.eventscribe.net/2021/ICCV/fsPopup.asp?efp=WlJFS0tHTEMxNTgzMA%20&PosterID=428697%20&rnd=0.4100732&mode=posterinfo). This is joint work with Stefano Soatto at the UCLA Vision Lab.
In a nutshell: we propose a method for point cloud densification (from camera, IMU, range sensor) that can generalize well across different sensor platforms. The figure in this link illustrates our improvement over existing works: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/figures/overview_teaser.gif
The slightly longer version: previous methods, when trained on one sensor platform, have trouble generalizing to different ones when deployed in the wild, because they overfit to the sensors used to collect the training set. Our method takes an image, a sparse point cloud, and the camera calibration as input, which allows us to use a different calibration at test time. This significantly improves generalization to novel scenes captured by sensors different from those used during training. Among our innovations is a "calibrated backprojection layer" that imposes a strong inductive bias on the network (as opposed to trying to learn everything from the data). This design allows our method to achieve the state of the art in both indoor and outdoor scenarios while using a smaller model and boasting faster inference.
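To make the calibration dependence concrete, here is a minimal sketch of the geometry a calibrated backprojection builds on: lifting pixels to 3D points with the intrinsic matrix K. This illustrates only the underlying operation, not the paper's actual layer, which applies the idea to feature maps inside the network.

```python
# Sketch of the core geometric operation behind calibrated backprojection:
# X = d * K^{-1} [u, v, 1]^T for every pixel (u, v) with depth d.
# Because K is an explicit input, a different calibration can be supplied
# at test time, which is what enables transfer across sensor platforms.
import torch

def backproject(depth: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Backproject a depth map to a 3D point cloud.

    depth : (H, W) depth in meters
    K     : (3, 3) camera intrinsic matrix
    returns (H, W, 3) points in the camera frame
    """
    H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype),
        torch.arange(W, dtype=depth.dtype),
        indexing="ij",
    )
    pixels = torch.stack([u, v, torch.ones_like(u)], dim=-1)  # homogeneous pixels
    rays = pixels @ torch.inverse(K).T                        # K^{-1} [u, v, 1]^T per pixel
    return rays * depth.unsqueeze(-1)                         # scale rays by depth

# Example with made-up intrinsics: a constant-depth plane lifted to 3D.
K = torch.tensor([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])
points = backproject(torch.ones(480, 640), K)  # (480, 640, 3)
```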
For those interested, here are the links:
paper: https://arxiv.org/pdf/2108.10531.pdf
code (PyTorch): https://github.com/alexklwong/calibrated-backprojection-network
-
[R] ICCV2021 oral paper -- Unsupervised Depth Completion with Calibrated Backprojection Layers improves generalization across sensor platforms
Code for https://arxiv.org/abs/2108.10531 found: https://github.com/alexklwong/calibrated-backprojection-network
What are some alternatives?
2dimageto3dmodel - We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all other supervised and unsupervised methods and 3D representations in terms of performance, accuracy, and training time.
EasyCV - An all-in-one toolkit for computer vision