| | Unsupervised-Semantic-Segmentation | mmselfsup |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 386 | 3,089 |
| Growth | - | 0.8% |
| Activity | 1.8 | 5.3 |
| Latest Commit | almost 2 years ago | 10 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Unsupervised-Semantic-Segmentation
- Unsupervised semantic segmentation
Check out these unsupervised masks created in exactly this way in this paper. They are nearly perfect.
mmselfsup
- MMDeploy: Deploy All the Algorithms of OpenMMLab
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- Does anyone know how a loss curve like this can happen? Details in comments
For some reason, the loss goes up sharply right at the start and then slowly comes back down. I am self-supervised pretraining an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). This shape occurred on both the COCO-2017 dataset and my custom dataset, and as you can see, it happens consistently across different runs. How could the loss increase so sharply, and is it indicative of an issue with the training? The loss peaks before the first epoch is finished. Unfortunately, the library does not support validation.
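For context on why such a spike can occur: DenseCL-style methods optimize an InfoNCE contrastive loss in which a query embedding is scored against one positive and a large memory queue of negatives. The sketch below (plain NumPy, not mmselfsup's actual implementation; `info_nce_loss`, `tau`, and the shapes are illustrative assumptions) shows the quantity being minimized:

```python
import numpy as np

def info_nce_loss(q, k_pos, k_neg, tau=0.2):
    """InfoNCE loss for one query `q` against a positive key `k_pos`
    and a queue of negative keys `k_neg` (shape: [num_negatives, dim]).
    All embeddings are L2-normalized, as in MoCo/DenseCL-style methods."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_neg = k_neg / np.linalg.norm(k_neg, axis=1, keepdims=True)

    l_pos = q @ k_pos / tau            # similarity to the positive key
    l_neg = k_neg @ q / tau            # similarities to all negatives
    logits = np.concatenate(([l_pos], l_neg))

    # Cross-entropy with the positive as the target class:
    # loss = log(sum_j exp(logit_j)) - logit_positive
    return float(np.log(np.sum(np.exp(logits))) - l_pos)

rng = np.random.default_rng(0)
query = rng.normal(size=128)
queue = rng.normal(size=(1024, 128))   # memory queue of negative embeddings
loss = info_nce_loss(query, query, queue)  # perfectly aligned positive
```

One common (hedged) reading of the early spike: the negative queue starts out filled with embeddings from a randomly initialized encoder, so the log-sum denominator is easy to beat; as the queue fills with progressively harder negatives from the improving momentum encoder, the loss can rise before the representations catch up, then decline. That pattern would be a property of the objective rather than a training bug.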
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
What are some alternatives?
- Unsupervised-Classification - SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]
- anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
- DA-RetinaNet - Official Detectron2 implementation of DA-RetinaNet from our Image and Vision Computing 2021 work 'An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites'
- calibrated-backprojection-network - PyTorch implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
- solo-learn - solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
- mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
- DiffCSE - Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"
- barlowtwins - Implementation of the Barlow Twins paper
- PASS - The PASS dataset: pretrained models and how to get the data
- Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
- animessl - Train vision models with vissl + illustrated images
- mmpretrain - OpenMMLab Pre-training Toolbox and Benchmark