| | ComboLoss | pytorch-metric-learning |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 30 | 5,764 |
| Growth | - | - |
| Activity | 3.6 | 7.9 |
| Latest commit | over 3 years ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ComboLoss
-
[D] Could this network be used to generate the most attractive image possible? What would it look like... -"ComboLoss for Facial Attractiveness Analysis with Squeeze-and-Excitation Networks"
Abstract: The loss function is crucial for model training and feature representation learning. Conventional models usually regard the facial attractiveness recognition task as a regression problem and adopt MSE loss or a Huber variant loss as supervision to train a deep convolutional neural network (CNN) to predict a facial attractiveness score. Little work has been done to systematically compare the performance of diverse loss functions. In this paper, we first systematically analyze model performance under diverse loss functions. Then a novel loss function named ComboLoss is proposed to guide the SEResNeXt50 network. The proposed method achieves state-of-the-art performance on the SCUT-FBP, HotOrNot and SCUT-FBP5500 datasets, with improvements of 1.13%, 2.1% and 0.57% over prior arts, respectively. Code and models are available at this https URL.
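The core idea the abstract describes, combining several regression losses into one supervision signal, can be illustrated with a minimal pure-Python sketch. The component losses (MSE and MAE) and the weighting scheme below are illustrative assumptions, not the paper's exact formulation:

```python
def mse(preds, targets):
    """Mean squared error over a batch of predicted scores."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def mae(preds, targets):
    """Mean absolute error over a batch of predicted scores."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def combo_loss(preds, targets, w_mse=1.0, w_mae=1.0):
    """Hypothetical weighted sum of regression losses.

    The actual ComboLoss in the paper may combine different terms;
    this only sketches the general recipe of summing weighted losses.
    """
    return w_mse * mse(preds, targets) + w_mae * mae(preds, targets)

# Predicted vs. ground-truth attractiveness scores for a small batch
preds = [3.2, 4.1, 2.8]
targets = [3.0, 4.5, 2.5]
loss = combo_loss(preds, targets)
```

In practice the weights would be tuned on a validation set, since the squared and absolute terms are on different scales.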
pytorch-metric-learning
-
Similarity Learning lacks a framework. So we built one
Not a full-featured framework, but pytorch-metric-learning has data loaders, losses, etc. to facilitate similarity learning: https://github.com/KevinMusgrave/pytorch-metric-learning
Disclaimer: I've made some contributions to it.
-
[R][D] VAE Embedding Space - Can we force it to learn a metric?
You can use the triplet loss together with the Gaussian prior. It will be zero-centered, though, and the clusters are not as separated as when you use the triplet loss only. There are many alternatives to the triplet loss, in case it needs to be a metric: https://github.com/KevinMusgrave/pytorch-metric-learning
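The triplet loss mentioned in the thread pulls an anchor embedding toward a positive example and pushes it away from a negative one by at least a margin. A minimal pure-Python sketch (the Euclidean distance and the margin value are illustrative choices; pytorch-metric-learning provides tensor-based versions of this and many related losses):

```python
import math

def euclidean(x, y):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: the anchor-positive distance should be
    smaller than the anchor-negative distance by at least `margin`."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Anchor close to the positive and far from the negative -> zero loss
anchor = [0.0, 0.0]
positive = [0.1, 0.0]
negative = [1.0, 1.0]
loss = triplet_loss(anchor, positive, negative)
```

The loss is zero once the margin constraint is satisfied, which is why triplet-only training can leave cluster geometry loose; combining it with a prior, as the comment suggests, trades some separation for a regularized embedding space.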
-
[D] Similar Image Retrieval
This repo provides the tools and examples needed to build such a model: https://github.com/KevinMusgrave/pytorch-metric-learning
What are some alternatives?
d2l-en - Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
dino - PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
jina - ☁️ Build multimodal AI applications with cloud-native stack
lightly - A python library for self-supervised learning on images.
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
byol-pytorch - Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch
autogluon - Fast and Accurate ML in 3 Lines of Code
simsiam-cifar10 - Code to train the SimSiam model on cifar10 using PyTorch
similarity - TensorFlow Similarity is a python package focused on making similarity learning quick and easy.
Transformer-SSL - This is an official implementation for "Self-Supervised Learning with Swin Transformers".
barlowtwins - Implementation of Barlow Twins paper