Best-Deep-Learning-Optimizers
Collection of the latest, greatest, deep learning optimizers (for Pytorch) - CNN, NLP suitable (by lessw2020)
RAdam
On the Variance of the Adaptive Learning Rate and Beyond (by LiyuanLucasLiu)
| | Best-Deep-Learning-Optimizers | RAdam |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 202 | 2,520 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | about 3 years ago | almost 3 years ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Best-Deep-Learning-Optimizers
Posts with mentions or reviews of Best-Deep-Learning-Optimizers.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-08-01.
- Why is my loss choppy?
  Check the initial description if you are interested: https://github.com/lessw2020/Best-Deep-Learning-Optimizers
RAdam
Posts with mentions or reviews of RAdam.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-12-19.
- [D] Why is there a sudden increase in accuracy at a specific epoch in these models?
  Code for https://arxiv.org/abs/1908.03265 found: https://github.com/LiyuanLucasLiu/RAdam
- [D] How to pick a learning rate scheduler?
  Common practice is to include some type of annealing (cosine, linear, etc.), which makes intuitive sense. For Adam/AdamW, it's generally a good idea to include a warmup in the LR schedule, as the gradient distribution without it can be distorted, leaving the optimizer trapped in a bad local minimum; see this paper. There are also optimizers introduced in that paper and subsequent works (RAdam, Ranger, and variants) that don't require a warmup stage to stabilize the gradients. In general, if you're using Adam/AdamW, include a warmup and some annealing, either linear or cosine; if you're using RAdam/Ranger/variants, you can skip the warmup. How many steps to use for warmup/annealing is probably problem-specific and requires some hyperparameter tuning to get optimal results. A minimal sketch of this recipe follows the list below.
- Why is my loss choppy?
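The scheduler recipe from the discussion above can be sketched with stock PyTorch pieces. This is only an illustrative sketch, not code from either repository: the model, learning rates, and step counts are placeholder assumptions, and it relies on `torch.optim.AdamW`, `torch.optim.RAdam`, and the built-in LR schedulers available in recent PyTorch releases.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = nn.Linear(10, 2)                 # stand-in for a real network (assumption)
total_steps, warmup_steps = 1000, 100    # problem-specific; tune for your task

# Adam/AdamW: linear warmup followed by cosine annealing, as suggested above.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.01, total_iters=warmup_steps),  # warmup
        CosineAnnealingLR(optimizer, T_max=total_steps - warmup_steps),    # annealing
    ],
    milestones=[warmup_steps],
)

# RAdam rectifies the adaptive learning rate, so the warmup stage can be skipped:
# optimizer = torch.optim.RAdam(model.parameters(), lr=3e-4)
# scheduler = CosineAnnealingLR(optimizer, T_max=total_steps)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy loss, for illustration only
    loss.backward()
    optimizer.step()
    scheduler.step()                                # advance the LR schedule each step
```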
What are some alternatives?
When comparing Best-Deep-Learning-Optimizers and RAdam you can also consider the following projects:
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
AdaBound - An optimizer that trains as fast as Adam and as good as SGD.
pytorch_warmup - Learning Rate Warmup in PyTorch
pytorch-optimizer - torch-optimizer -- collection of optimizers for Pytorch
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
deepnet - Educational deep learning library in plain Numpy.
sam - SAM: Sharpness-Aware Minimization (PyTorch)