mmselfsup vs mmagic

| | mmselfsup | mmagic |
|---|---|---|
| Mentions | 5 | 5 |
| Stars | 3,212 | 7,038 |
| Growth | 1.3% | 1.2% |
| Activity | 5.3 | 6.8 |
| Latest Commit | over 1 year ago | 6 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mmselfsup
- MMDeploy: Deploy All the Algorithms of OpenMMLab
  MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- Does anyone know how a loss curve like this can happen? Details in comments
  For some reason, the loss rises sharply right at the start and then slowly comes back down. I am doing self-supervised pretraining of an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). This shape occurred on both the COCO-2017 dataset and my custom dataset. As you can see, it happens consistently across different runs. How could the loss increase so sharply, and is it indicative of an issue with the training? The loss peaks before the first epoch finishes. Unfortunately, the library does not support validation. (See the config sketch after this list.)
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
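The post above asks why the DenseCL pretraining loss spikes right at the start of training. One cheap diagnostic experiment is a learning-rate warmup. Below is a minimal sketch in the mmcv/mmselfsup 0.x config style; the base config filename, the warmup length, and the warmup ratio are assumptions for illustration, not values taken from the post or from mmselfsup's defaults.

```python
# Hedged sketch: add a linear LR warmup to a DenseCL pretraining config.
# The base config path below is assumed -- point it at whichever DenseCL
# config you actually use under configs/selfsup/densecl/ in your checkout.
_base_ = 'configs/selfsup/densecl/densecl_resnet50_8xb32-coslr-200e_in1k.py'  # assumed path

# mmcv's LrUpdaterHook options: ramp the learning rate up linearly over the
# first few epochs instead of starting at the full base LR. A full-size LR
# hitting randomly initialized projection heads is one plausible cause of an
# early loss spike, so warmup is worth ruling out.
lr_config = dict(
    policy='CosineAnnealing',  # keep the cosine schedule used by DenseCL configs
    min_lr=0.0,
    warmup='linear',
    warmup_iters=5,            # warm up over the first 5 epochs (assumed value)
    warmup_ratio=1e-4,         # start at base_lr * 1e-4
    warmup_by_epoch=True,
)
```

With mmselfsup's standard entry point this would be launched as `python tools/train.py path/to/this_config.py`. Whether warmup actually flattens the early spike depends on the dataset and batch size, so treat it as one experiment rather than a fix.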
mmagic
- More than Editing, Unlock the Magic!
- MMEditing v1.0.0rc4 has been released (including Disco-Diffusion)
  Join us to make it better! Try at https://github.com/open-mmlab/mmediting/tree/1.x
- MMDeploy: Deploy All the Algorithms of OpenMMLab
  MMEditing: OpenMMLab image and video editing toolbox.
What are some alternatives?
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
contrastive-unpaired-translation - Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
Deep-Exemplar-based-Video-Colorization - The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
facexlib - FaceXlib aims at providing ready-to-use face-related functions based on current SOTA open-source methods.
Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
cnn-watermark-removal - Fully convolutional deep neural network to remove transparent overlays from images
mmdeploy - OpenMMLab Model Deployment Framework
Real-ESRGAN-colab - A Real-ESRGAN model trained on a custom dataset
SparK - [ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; Pytorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
a-PyTorch-Tutorial-to-Super-Resolution - Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution