calibrated-backprojection-network vs lightly

| | calibrated-backprojection-network | lightly |
|---|---|---|
| Mentions | 3 | 16 |
| Stars | 110 | 2,741 |
| Growth | - | 2.0% |
| Activity | 0.0 | 9.0 |
| Latest commit | 10 months ago | 9 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
calibrated-backprojection-network
-
ICCV2021 oral paper improves generalization across sensor platforms
Our work "Unsupervised Depth Completion with Calibrated Backprojection Layers" has been accepted as an oral paper at ICCV 2021! We will be giving our talk during Session 10 (10/13 2-3 pm PST / 5-6 pm EST and 10/15 7-8 am PST / 10-11 am EST, https://www.eventscribe.net/2021/ICCV/fsPopup.asp?efp=WlJFS0tHTEMxNTgzMA%20&PosterID=428697%20&rnd=0.4100732&mode=posterinfo). This is joint work with Stefano Soatto at the UCLA Vision Lab.
In a nutshell: we propose a method for point cloud densification (from camera, IMU, range sensor) that can generalize well across different sensor platforms. The figure in this link illustrates our improvement over existing works: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/figures/overview_teaser.gif
The slightly longer version: previous methods, when trained on one sensor platform, have problems generalizing to different ones when deployed in the wild. This is because they are overfit to the sensors used to collect the training set. Our method takes an image, a sparse point cloud, and the camera calibration as input, which allows us to use a different calibration at test time. This significantly improves generalization to novel scenes captured by sensors different from those used during training. Among our innovations is a "calibrated backprojection layer" that imposes a strong inductive bias on the network (as opposed to trying to learn everything from the data). This design allows our method to achieve the state of the art in both indoor and outdoor scenarios while using a smaller model and boasting faster inference.
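To make the core idea concrete: "calibrated backprojection" lifts pixels into 3D rays using the camera intrinsics K, so the calibration is an explicit input rather than something baked into learned weights. The snippet below is not the paper's learned layer, just a minimal NumPy sketch of the underlying geometry; the intrinsics values are made up for illustration.

```python
import numpy as np

def backproject(depth, K):
    """Backproject a dense depth map into a 3D point cloud using intrinsics K.

    Because K is an explicit argument, a different calibration can be
    supplied at test time -- the property the paper's layer exploits.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel coordinates, shape (3, h*w)
    pixels = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    rays = np.linalg.inv(K) @ pixels          # per-pixel viewing rays (z = 1)
    points = rays * depth.reshape(1, -1)      # scale each ray by its depth
    return points.reshape(3, h, w)

# Hypothetical pinhole intrinsics: focal length 500, principal point (32, 24)
K = np.array([[500.0, 0.0, 32.0],
              [0.0, 500.0, 24.0],
              [0.0, 0.0, 1.0]])
cloud = backproject(np.full((48, 64), 2.0), K)
```

Swapping in the test-time camera's K here is exactly what lets a model trained on one sensor platform run on another.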
For those interested, here are the links to
paper: https://arxiv.org/pdf/2108.10531.pdf
code (pytorch): https://github.com/alexklwong/calibrated-backprojection-network
-
[R] ICCV2021 oral paper -- Unsupervised Depth Completion with Calibrated Backprojection Layers improves generalization across sensor platforms
Code for https://arxiv.org/abs/2108.10531 found: https://github.com/alexklwong/calibrated-backprojection-network
lightly
- Show HN: Lightly – A Python library for self-supervised learning on images
- GitHub - lightly-ai/lightly: A python library for self-supervised learning on images.
- A Python library for self-supervised learning on images
-
[P] Release of lightly 1.2.39 - A python library for self-supervised learning
Another year has passed, and we’ve seen exciting progress in research around self-supervised learning in computer vision. We’re very excited that some of the recent models, such as Masked Autoencoders (MAE) and Masked Siamese Networks (MSN), have been added to our OSS framework.
-
Self-Supervised Models are More Robust and Fair
If you’re interested in self-supervised learning and want to try it out yourself you can check out our open-source repository for self-supervised learning.
-
[D] Can a Siamese Neural Network work for invoice classification?
I assume that you have an image of the invoice. Then using a framework like https://github.com/lightly-ai/lightly, with many implemented algorithms, is the way to go. After that step, with a model producing embeddings, you need to compare the embedding of a query against your known database and check whether the distance is below some threshold. Of course, the nearest-neighbor pipeline can be more elaborate, but I would start with something really simple.
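The "compare against a database with a threshold" step described above can be sketched in a few lines. This is a generic NumPy illustration, not lightly's API; the threshold value and the toy embeddings are made up and would need tuning on real data.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(query, database, threshold=0.3):
    """Return the label of the closest database embedding,
    or None if nothing is within the distance threshold."""
    best_label, best_dist = None, np.inf
    for label, emb in database.items():
        d = cosine_distance(query, emb)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < threshold else None

# Toy 2-D embeddings standing in for real model outputs
database = {"invoice_a": np.array([1.0, 0.0]),
            "invoice_b": np.array([0.0, 1.0])}
match = classify(np.array([0.9, 0.1]), database)
no_match = classify(np.array([-1.0, -1.0]), database)
```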
-
[P] TensorFlow Similarity now supports self-supervised training
https://github.com/lightly-ai/lightly implements a lot of self-supervised models, and has been available for a while.
-
Launch HN: Lightly (YC S21): Label only the data which improves your ML model
modAL indeed has a similar goal of choosing the best subset of data to be labeled. However, it has some notable differences:
modAL is built on scikit-learn which is also evident from the suggested workflow. Lightly on the other hand was specifically built for deep learning applications supporting active learning for classification but also object detection and semantic segmentation.
modAL provides uncertainty-based active learning. However, it has been shown that uncertainty-based AL fails at batch-wise AL for vision datasets and CNNs, see https://arxiv.org/abs/1708.00489. Furthermore, it only works with an initially trained model and thus a labeled dataset. Lightly offers self-supervised learning to learn high-dimensional embeddings through its open-source package https://github.com/lightly-ai/lightly. They can be used through our API to choose a diverse subset. Optionally, this sampling can be combined with uncertainty-based AL.
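The "diverse subset" idea from the cited core-set paper (https://arxiv.org/abs/1708.00489) can be illustrated with greedy k-center selection over embeddings: repeatedly pick the point farthest from everything chosen so far. This is a minimal NumPy sketch of that strategy, not Lightly's actual sampling implementation.

```python
import numpy as np

def kcenter_greedy(embeddings, k, seed=0):
    """Greedy k-center (core-set) selection.

    embeddings: array of shape (n, d)
    Returns k indices forming a diverse subset: each new pick is the
    point with the largest distance to its nearest already-selected point.
    """
    n = embeddings.shape[0]
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(n))]  # random initial center
    # Distance from every point to its nearest selected center
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(dists))    # farthest point from the selection
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected

# Two well-separated toy clusters: a diverse pick should span both
points = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
chosen = kcenter_greedy(points, 2)
```

Unlike uncertainty sampling, this needs no labels or initial model, only embeddings, which is why it pairs naturally with self-supervised pretraining.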
- Lightly – A Python library for self-supervised learning on images
-
Active Learning using Detectron2
You can easily train, embed, and upload a dataset using the lightly Python package. First, we need to install the package; we recommend using pip for this. Make sure you're in a Python 3.6+ environment. If you're on Windows, you should create a conda environment.
What are some alternatives?
EasyCV - An all-in-one toolkit for computer vision
pytorch-metric-learning - The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
simsiam-cifar10 - Code to train the SimSiam model on cifar10 using PyTorch
manydepth - [CVPR 2021] Self-supervised depth estimation from short sequences
byol - Implementation of the BYOL paper.
mmselfsup - OpenMMLab Self-Supervised Learning Toolbox and Benchmark
comma10k - 10k crowdsourced images for training segnets
NeuralRecon - Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral
dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
simplerecon - [ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions
byol-pytorch - Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch