mmselfsup vs calibrated-backprojection-network

| | mmselfsup | calibrated-backprojection-network |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 3,212 | 121 |
| Growth | 1.3% | 2.5% |
| Activity | 5.3 | 2.0 |
| Latest Commit | over 1 year ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
mmselfsup

- MMDeploy: Deploy All the Algorithms of OpenMMLab
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- Does anyone know how a loss curve like this can happen? Details in comments
For some reason, the loss goes up sharply right at the start and then slowly comes back down. I am self-supervised pretraining an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). This shape appeared on both the COCO 2017 dataset and my custom dataset, and as you can see, it happens consistently across different runs. How can the loss increase so sharply, and is it indicative of an issue with the training? The loss peaks before the first epoch is finished. Unfortunately, the library does not support validation. (See the loss sketch after this list.)
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
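For context on the loss-curve question above: DenseCL builds on a MoCo-style contrastive objective with a memory queue of negatives. Below is a minimal, hedged sketch of that InfoNCE loss, not mmselfsup's actual implementation; the function name, shapes, and default temperature are illustrative assumptions. One possible (unverified) explanation for an early spike is that the chance-level loss grows roughly like log(K + 1) while the queue fills with real features during the first epoch.

```python
# Minimal sketch of a MoCo-style InfoNCE loss with a memory queue of
# negatives (as used by DenseCL). Not mmselfsup's implementation.
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, queue, temperature=0.2):
    """q, k_pos: (N, C) L2-normalized embeddings; queue: (C, K) negative keys."""
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)   # (N, 1) positive logits
    l_neg = torch.einsum("nc,ck->nk", q, queue)                # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # For a random encoder the expected loss is roughly log(K + 1), so the
    # loss can rise while the queue fills early in training, then fall as
    # the encoder improves.
    return F.cross_entropy(logits, labels)
```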
calibrated-backprojection-network

- ICCV2021 oral paper improves generalization across sensor platforms
Our work "Unsupervised Depth Completion with Calibrated Backprojection Layers" has been accepted as an oral paper at ICCV 2021! We will be giving our talk during Session 10 (10/13 2-3 pm PST / 5-6 pm EST and 10/15 7-8 am PST / 10-11 am EST, https://www.eventscribe.net/2021/ICCV/fsPopup.asp?efp=WlJFS0tHTEMxNTgzMA%20&PosterID=428697%20&rnd=0.4100732&mode=posterinfo). This is joint work with Stefano Soatto at the UCLA Vision Lab.
In a nutshell: we propose a method for point cloud densification (from camera, IMU, range sensor) that can generalize well across different sensor platforms. The figure in this link illustrates our improvement over existing works: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/figures/overview_teaser.gif
The slightly longer version: previous methods, when trained on one sensor platform, have trouble generalizing to different ones when deployed in the wild, because they overfit to the sensors used to collect the training set. Our method takes an image, a sparse point cloud, and the camera calibration as input, which allows us to use a different calibration at test time. This significantly improves generalization to novel scenes captured by sensors different from those used during training. Among our innovations is a "calibrated backprojection layer" that imposes a strong inductive bias on the network (as opposed to trying to learn everything from the data). This design allows our method to achieve state of the art on both indoor and outdoor scenarios while using a smaller model and offering faster inference.
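For intuition, here is a minimal, hedged sketch of the geometric operation the layer's name refers to: backprojecting pixels to 3D using the camera intrinsics K, so calibration enters the network explicitly rather than being baked into learned weights. This is not the authors' layer, just the standard lifting it builds on; the function name and shapes are illustrative assumptions.

```python
# Sketch of calibrated backprojection: lift each pixel (u, v) with depth d
# to a 3D point d * K^{-1} [u, v, 1]^T in the camera frame.
import torch

def backproject(depth, K):
    """depth: (H, W) depth map; K: (3, 3) camera intrinsics.
    Returns (H, W, 3) 3D points in the camera frame."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pixels = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()  # homogeneous pixel coords
    rays = pixels @ torch.inverse(K).T          # K^{-1} [u, v, 1]^T per pixel
    return rays * depth.unsqueeze(-1)           # scale each ray by its depth
```

Because K is an explicit input, a different calibration can be supplied at test time without retraining.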
For those interested, here are the links:
- paper: https://arxiv.org/pdf/2108.10531.pdf
- code (PyTorch): https://github.com/alexklwong/calibrated-backprojection-network
- [R] ICCV2021 oral paper -- Unsupervised Depth Completion with Calibrated Backprojection Layers improves generalization across sensor platforms
Code for https://arxiv.org/abs/2108.10531 found: https://github.com/alexklwong/calibrated-backprojection-network
What are some alternatives?
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
surface_normal_uncertainty - [ICCV 2021 Oral] Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation
Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
eirli - An Empirical Investigation of Representation Learning for Imitation (EIRLI), NeurIPS'21
mmdeploy - OpenMMLab Model Deployment Framework
simplerecon - [ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions
SparK - [ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; Pytorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
EasyCV - An all-in-one toolkit for computer vision
mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, text-to-image generation, image/video restoration/enhancement, etc.
3d-transforms - 3D Transforms is a library to easily work with 3D data and make 3D transformations. This library originally started as a few functions here and there for my own work which I then turned into a library.