| | contrastive-reconstruction | mmselfsup |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 13 | 3,169 |
| Growth | - | 0.0% |
| Activity | 10.0 | 5.3 |
| Latest commit | almost 3 years ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
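The exact formula behind the activity number is not published here, but a recency-weighted score of the kind described is easy to sketch. The toy Python function below uses an assumed 30-day half-life (an illustration, not the site's real parameter) to show how recent commits dominate the score:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days` days (assumed value)."""
    now = datetime.now(timezone.utc)
    return sum(0.5 ** ((now - d).total_seconds() / 86400.0 / half_life_days)
               for d in commit_dates)

# A repo with one recent commit outscores one with several stale commits.
recent = [datetime.now(timezone.utc)]
old = [datetime(2020, 1, 1, tzinfo=timezone.utc)] * 5
print(activity_score(recent) > activity_score(old))  # True
```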
contrastive-reconstruction
- [D] Use SimCLR as pretraining for image segmentation
  How about this: https://github.com/bayer-science-for-a-better-life/contrastive-reconstruction
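For context on the suggestion above: SimCLR pretraining optimizes the NT-Xent contrastive loss over two augmented views of each image. Below is a minimal PyTorch sketch of that loss; the function name and shapes are illustrative and are not taken from the linked repository.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) loss for two batches of projected embeddings.

    z1, z2: (N, D) projections of two augmented views of the same images.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # mask self-similarity
    # The positive for sample i is its other view: index i+N (mod 2N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets.to(z.device))

# Usage: z1, z2 would come from an encoder + projection head applied to
# two random augmentations of the same batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```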
mmselfsup
- MMDeploy: Deploy All the Algorithms of OpenMMLab
  MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- Does anyone know how a loss curve like this can happen? Details in comments
  For some reason, the loss rises sharply right at the start and then slowly comes back down. I am self-supervised pretraining an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). This shape appeared on both the COCO 2017 dataset and my custom dataset, and as you can see, it happens consistently across different runs. How could the loss increase so sharply, and is it indicative of an issue with the training? The loss peaks before the first epoch is finished. Unfortunately, the library does not support validation. (See the log-inspection sketch after this list.)
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
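Regarding the loss-curve question above: since the library does not run validation, one practical option is to inspect the training log offline. The sketch below assumes the OpenMMLab-style `.log.json` format (one JSON record per line, with a `loss` key on training iterations); the file path is hypothetical and the smoothing factor is an arbitrary choice.

```python
import json

def load_losses(path):
    """Collect per-iteration training losses from an OpenMMLab-style
    .log.json file (assumed format: one JSON dict per line)."""
    losses = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("mode") == "train" and "loss" in rec:
                losses.append(rec["loss"])
    return losses

def ema(values, alpha=0.02):
    """Exponential moving average to smooth the noisy per-iteration loss."""
    out, acc = [], None
    for v in values:
        acc = v if acc is None else (1 - alpha) * acc + alpha * v
        out.append(acc)
    return out

# Hypothetical work directory and timestamp.
losses = load_losses("work_dirs/densecl/20230101_000000.log.json")
smoothed = ema(losses)
peak = max(range(len(smoothed)), key=smoothed.__getitem__)
print(f"smoothed loss peaks at iteration {peak}")
```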
What are some alternatives?
MAGIST-Algorithm - Multi-Agent Generally Intelligent Simultaneous Training Algorithm for Project Zeta
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
Unsupervised-Classification - SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
solo-learn - solo-learn: a library of self-supervised methods for visual representation learning powered by Pytorch Lightning
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
vissl - VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.
mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
self_supervised - Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.
Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
animessl - Train vision models with vissl + illustrated images
mmpretrain - OpenMMLab Pre-training Toolbox and Benchmark