| | solo-learn | mmselfsup |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 1,358 | 3,095 |
| Growth | - | 1.0% |
| Activity | 5.9 | 5.3 |
| Latest commit | 5 days ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
solo-learn
- [D] Use SimCLR as pretraining for image segmentation
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?

  Hello, I've lost touch with the current SOTA in self-supervised pretraining of CNNs, ResNets in particular. I found this repository https://github.com/vturrisi/solo-learn, which has many methods implemented, but I'm not sure where to start. My goal is to pretrain a ResNet backbone on a decently large amount of image data from a certain domain and then fine-tune it for different downstream tasks (classification, segmentation, object detection) on the subset of the data I have labels for.
- [P] Solo-learn 1.0.3: new methods, support for transformer architectures, better evaluation, improved docs, and additional results.

  Hi Reddit, the solo-learn team is back again with interesting news about its SSL library.
mmselfsup
- MMDeploy: Deploy All the Algorithms of OpenMMLab

  MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- Does anyone know how a loss curve like this can happen? Details in comments

  For some reason, the loss goes up sharply right at the start and then slowly comes back down. I am self-supervised pretraining an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). This shape occurred on both the COCO-2017 dataset and my custom dataset, and it happens consistently across different runs. How could the loss increase so sharply, and is it indicative of an issue with the training? The loss peaks before the first epoch is finished. Unfortunately, the library does not support validation.
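Without a validation loop, one cheap sanity check is to parse the training log and smooth the loss curve to see whether the early spike actually decays. MMCV-based trainers (which mmselfsup builds on) typically emit a JSON-lines log; the exact record keys (`mode`, `iter`, `loss`) vary by version, so treat them as assumptions in this sketch:

```python
# Parse JSON-lines training records and compute a trailing moving
# average of the loss, which makes the spike-then-decay shape easier
# to judge than raw per-iteration values.
import json


def smoothed_losses(log_lines, window=3):
    """Extract train-loss values and return their trailing moving average."""
    losses = [
        rec["loss"]
        for rec in map(json.loads, log_lines)
        if rec.get("mode") == "train" and "loss" in rec
    ]
    out = []
    for i in range(len(losses)):
        chunk = losses[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


# Hypothetical log records illustrating an early loss spike.
sample = [
    '{"mode": "train", "iter": 50, "loss": 4.1}',
    '{"mode": "train", "iter": 100, "loss": 9.8}',
    '{"mode": "train", "iter": 150, "loss": 7.2}',
    '{"mode": "train", "iter": 200, "loss": 5.0}',
]
print(smoothed_losses(sample, window=2))
```

A spike of this shape early in training is often tied to learning-rate warmup or queue/memory-bank initialization in contrastive methods, so plotting the smoothed curve against the LR schedule is a reasonable next diagnostic step.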
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
What are some alternatives?
dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
lightning-transformers - Flexible components pairing 🤗 Transformers with :zap: PyTorch Lightning
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
contrastive-reconstruction - Tensorflow-keras implementation for Contrastive Reconstruction (ConRec) : a self-supervised learning algorithm that obtains image representations by jointly optimizing a contrastive and a self-reconstruction loss.
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
barlowtwins - Implementation of Barlow Twins paper
CEBRA - Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA