| | AdCo | Unsupervised-Classification |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 161 | 1,309 |
| Growth | 0.0% | - |
| Activity | 0.9 | 1.4 |
| Latest commit | about 1 year ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AdCo
- [D] Negative examples are still useful in self-supervised learning even after BYOL, and they are directly trainable end-to-end with a backbone.
  The paper showed that with only 8196 negatives, AdCo can achieve better performance than SOTA self-supervised methods (MoCo v2, SimCLR, BYOL and SwAV) with fewer epochs, making AdCo a very efficient self-supervised learning algorithm for pretraining a backbone. The source code has been released at https://github.com/maple-research-lab/AdCo.
- [R] AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
  The source code is available at https://github.com/maple-research-lab/AdCo/. The paper will be presented at CVPR 2021.
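The posts above describe AdCo's core idea: the negative examples are kept as a set of embeddings that is trained end-to-end together with the backbone, by gradient ascent on the same contrastive loss, so the negatives act as adversaries that stay hard to discriminate. A minimal NumPy sketch of that adversarial update (an illustration under assumed names and step sizes, not the official AdCo implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_grads(query, positive, negatives, tau=0.1):
    """InfoNCE loss and its gradient w.r.t. the negative embeddings.

    query, positive: (d,) unit vectors; negatives: (K, d) unit vectors.
    """
    logits = np.concatenate([[query @ positive], negatives @ query]) / tau
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()    # softmax over 1 + K logits
    loss = -np.log(p[0])                         # -log p(positive)
    # d(loss)/d(negative_k) = p[k+1] * query / tau  (positive index is 0)
    grad_neg = np.outer(p[1:], query) / tau
    return loss, grad_neg

d, K = 16, 8                                     # toy sizes (assumed)
query = l2_normalize(rng.normal(size=d))
positive = l2_normalize(query + 0.1 * rng.normal(size=d))  # view of same image
negatives = l2_normalize(rng.normal(size=(K, d)))

loss_before, grad_neg = info_nce_grads(query, positive, negatives)
# Adversarial step: the negatives ASCEND the contrastive loss (become harder),
# while the backbone would descend it; here we update only the negatives.
negatives = l2_normalize(negatives + 0.5 * grad_neg)
loss_after, _ = info_nce_grads(query, positive, negatives)
```

After the ascent step each negative gains a component along the query, so the contrastive loss increases (`loss_after > loss_before`): the bank has become a harder set of adversaries for the backbone to push away on its next descent step.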
Unsupervised-Classification
- Middle ground dataset between CIFAR and ImageNet [D]
  The subsets we used are from here: https://github.com/wvangansbeke/Unsupervised-Classification/tree/master/data/imagenet_subsets
- Any reference or idea about how to train an unsupervised CNN model?
What are some alternatives?
simclr - SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
self-supervised - Whitening for Self-Supervised Representation Learning | Official repository
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
contrastive-reconstruction - Tensorflow-keras implementation of Contrastive Reconstruction (ConRec): a self-supervised learning algorithm that obtains image representations by jointly optimizing a contrastive and a self-reconstruction loss.
DiffCSE - Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"
Transformer-SSL - This is an official implementation for "Self-Supervised Learning with Swin Transformers".
vectorrvnn - Data Driven method for hierarchical grouping of paths in Vector Graphics.
cs231n
SimMIM - This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
PASS - The PASS dataset: pretrained models and how to get the data
PaddleHelix - Bio-Computing Platform Featuring Large-Scale Representation Learning and Multi-Task Deep Learning (the “螺旋桨” bio-computing toolkit)
Clover - An Efficient DNA Clustering algorithm based on Tree Structure.