ComboLoss
d2l-en
| | ComboLoss | d2l-en |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 30 | 21,704 |
| Growth | - | 1.3% |
| Activity | 3.6 | 8.5 |
| Latest commit | over 3 years ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ComboLoss
- [D] Could this network be used to generate the most attractive image possible? What would it look like... - "ComboLoss for Facial Attractiveness Analysis with Squeeze-and-Excitation Networks"
Abstract: The loss function is crucial for model training and feature representation learning. Conventional models usually regard the facial attractiveness recognition task as a regression problem and adopt MSE loss or a Huber-variant loss as supervision to train a deep convolutional neural network (CNN) to predict a facial attractiveness score. Little work has been done to systematically compare the performance of diverse loss functions. In this paper, we first systematically analyze model performance under diverse loss functions. We then propose a novel loss function named ComboLoss to guide the SEResNeXt50 network. The proposed method achieves state-of-the-art performance on the SCUT-FBP, HotOrNot and SCUT-FBP5500 datasets, with improvements of 1.13%, 2.1% and 0.57% over prior art, respectively. Code and models are available at this https URL.
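The abstract contrasts MSE and Huber-style supervision before blending them. As an illustration only (the paper's exact ComboLoss formulation is not given here, and `alpha` is a hypothetical mixing weight), a weighted MSE-plus-Huber blend for a score-regression target might look like:

```python
import numpy as np

def mse_loss(pred, target):
    # Mean squared error: penalizes large score errors quadratically.
    return float(np.mean((pred - target) ** 2))

def huber_loss(pred, target, delta=1.0):
    # Huber loss: quadratic for errors below `delta`, linear beyond it,
    # which makes it less sensitive to outlier ratings than MSE.
    err = np.abs(pred - target)
    quad = np.minimum(err, delta)
    return float(np.mean(0.5 * quad ** 2 + delta * (err - quad)))

def combo_loss(pred, target, alpha=0.5, delta=1.0):
    # Hypothetical weighted blend of the two supervision signals;
    # `alpha` trades off the MSE term against the Huber term.
    return alpha * mse_loss(pred, target) + (1 - alpha) * huber_loss(pred, target, delta)
```

A perfect prediction gives a loss of zero under any `alpha`, while a large error is dominated by the MSE term; tuning `alpha` is one way to interpolate between the two behaviors the abstract compares.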
d2l-en
- Which book to choose for deep learning: Ian Goodfellow or François Chollet?
- d2l-en: Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 400 universities from 60 countries including Stanford, MIT, Harvard, and Cambridge.
- How to pre-train BERT on different objective tasks using HuggingFace
There is a BERT library for pre-training BERT models in Hugging Face, but I suggest you pre-train a BERT model in native PyTorch to understand the details; Li Mu's course is recommended.
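The core of the BERT pre-training objective mentioned above is masked language modeling. A minimal, framework-free sketch of the masking step (the 80/10/10 split follows the original BERT recipe; the token id 103 for `[MASK]` and the vocabulary size follow the standard `bert-base-uncased` vocab) might look like:

```python
import random

MASK_ID = 103  # id of [MASK] in the standard bert-base-uncased vocabulary

def mask_tokens(token_ids, mask_prob=0.15, vocab_size=30522, seed=None):
    """BERT-style MLM masking: pick ~`mask_prob` of positions; of those,
    80% become [MASK], 10% a random token, 10% stay unchanged.
    Returns (inputs, labels); labels are -100 at unselected positions
    so they are ignored by the loss."""
    rng = random.Random(seed)
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok  # predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID          # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, labels
```

The model is then trained to predict the original token at every position where the label is not -100; writing this step by hand, as the answer suggests, makes the objective much clearer than calling a library collator.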
- The Transformer in Machine Translation
GitHub's article on Dive into Deep Learning
- D2l-En
- I created a way to learn machine learning through Jupyter
There are actually some online books and courses built on Jupyter Notebook ([Dive into Deep Learning](https://github.com/d2l-ai/d2l-en), for example). However, yours is more detailed and could really help beginners.
What are some alternatives?
pytorch-metric-learning - The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
Pytorch-UNet - PyTorch implementation of the U-Net for image semantic segmentation with high quality images
jina - ☁️ Build multimodal AI applications with cloud-native stack
DeepADoTS - Repository of the paper "A Systematic Evaluation of Deep Anomaly Detection Methods for Time Series".
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
TF-Watcher - Monitor your ML jobs on mobile devices📱, especially for Google Colab / Kaggle
99-ML-Learning-Projects - A list of 99 machine learning projects for anyone interested to learn from coding and building projects
imbalanced-regression - [ICML 2021, Long Talk] Delving into Deep Imbalanced Regression
petastorm - Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as Tensorflow, Pytorch, and PySpark and can be used from pure Python code.
einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
learning-topology-synthetic-data - Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
ssd_keras - A Keras port of Single Shot MultiBox Detector