| | sam | RAdam |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 1,655 | 2,520 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest Commit | 3 months ago | almost 3 years ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sam
-
What is the correct way to sum loss into a total loss and then to backprop?
From here I understand that I shouldn't use the same loss variable for both forward passes, but I'm not sure how else to do this. I thought I could create a variable called total_loss, add each iteration's loss to it, and then backprop over it after the iterations, but I'm not sure whether that's the correct approach.
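A minimal sketch of both options, assuming a standard PyTorch setup (the `model`, `criterion`, `optimizer`, and `batches` names below are placeholders, not anything from the question): either accumulate the losses into one `total_loss` tensor and call `backward()` once, or call `backward()` per loss and let the gradients accumulate until `optimizer.step()`. Both give the same summed gradient.

```python
import torch

# Hypothetical stand-ins for whatever the original training loop uses.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batches = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(3)]

# Option 1: accumulate losses into one tensor and backprop once.
optimizer.zero_grad()
total_loss = 0.0
for x, y in batches:
    total_loss = total_loss + criterion(model(x), y)  # keeps each graph alive
total_loss.backward()
optimizer.step()

# Option 2: backward() per batch; gradients accumulate in .grad until
# optimizer.step(), producing the same summed gradient with less memory.
optimizer.zero_grad()
for x, y in batches:
    loss = criterion(model(x), y)
    loss.backward()
optimizer.step()
```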
-
[R] Sharpness-Aware Minimization for Efficiently Improving Generalization
They reached SOTA on a few tasks. Do you really believe that the entire community missed the magic hyperparameters of batch size 128 and Adam to beat SOTA? I think getting SOTA really solidifies the approach, although the 2x speed cost seems heavy. As for implementation, it looks fairly trivial to adapt to all optimizers, at least judging from this random GitHub repo: https://github.com/davda54/sam
-
Help me implement this paper expanding on Google's SAM optimizer
Here is the code for SAM. SAM isn't too complicated. There are two forward and backward passes, a gradient ascent step after the first one and a gradient descent step after the second. The gradient ascent is what produces the perturbed ("noised") SAM model: for each p in the param group, add epsilon, which is rho * (p.grad / grad_norm), with rho being SAM's only hyperparameter.
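A minimal sketch of that two-pass update, assuming a standard PyTorch model and a base optimizer. The names here (`sam_step`, `model`, `loss_fn`) are placeholders, not the API of the linked repo:

```python
import torch

def sam_step(model, base_optimizer, loss_fn, x, y, rho=0.05):
    # First forward/backward pass: gradients at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Gradient ascent: perturb each parameter by eps = rho * grad / ||grad||,
    # moving to the approximate worst point in the rho-neighborhood.
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2)
    eps_list = []
    with torch.no_grad():
        for p in params:
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            eps_list.append(eps)
    model.zero_grad()

    # Second forward/backward pass: gradients at the perturbed ("noised") weights.
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then take the usual descent step with those gradients.
    with torch.no_grad():
        for p, eps in zip(params, eps_list):
            p.sub_(eps)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

The linked davda54/sam repo packages essentially the same logic as an optimizer wrapper around a base optimizer, with separate first and second step calls, rather than a single function like this.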
RAdam
-
[D] Why is there a sudden increase in accuracy at a specific epoch in these models?
Code for https://arxiv.org/abs/1908.03265 found: https://github.com/LiyuanLucasLiu/RAdam
-
[D] How to pick a learning rate scheduler?
Common practice is to include some type of annealing (cosine, linear, etc.), which makes intuitive sense. For Adam/AdamW, it's generally a good idea to include a warmup in the lr schedule, as the gradient distribution without the warmup can be distorted, leading to the optimizer being trapped in a bad local minimum; see this paper. There are also optimizers introduced in that paper and subsequent works (RAdam, Ranger, and variants) that don't require a warmup stage to stabilize the gradients. In general, if you're using Adam/AdamW, include a warmup and some annealing, either linear or cosine; if you're using RAdam/Ranger/variants, you can skip the warmup. How many steps to use for warmup/annealing is probably problem specific and requires some hyperparameter tuning to get optimal results.
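As a rough illustration of that advice, here is a minimal sketch of linear warmup followed by cosine annealing for AdamW; the model, step counts, and base learning rate are placeholder assumptions, not values from the discussion:

```python
import math
import torch

model = torch.nn.Linear(10, 2)  # hypothetical model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

total_steps = 10_000   # training length in optimizer steps (problem specific)
warmup_steps = 500     # a few hundred to a few thousand is a common starting point

def lr_lambda(step):
    # Linear warmup from 0 to the base lr, then cosine annealing back down to 0.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Typical loop: step the scheduler once per optimizer step.
# for batch in loader:
#     loss = compute_loss(batch)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     scheduler.step()
```

Built-in schedulers such as `LinearLR` and `CosineAnnealingLR` chained with `SequentialLR` are an alternative to the hand-rolled lambda above.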
-
Why is my loss choppy?
What are some alternatives?
pytorch-optimizer - torch-optimizer -- collection of optimizers for Pytorch
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
southpaw - Python Fanduel API (2023) - Lineup Automation
AdaBound - An optimizer that trains as fast as Adam and as good as SGD.
AdamP - AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
pytorch_warmup - Learning Rate Warmup in PyTorch
PHP Documentor 3 - Documentation Generator for PHP
simple-sam - Sharpness-Aware Minimization for Efficiently Improving Generalization
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
Best-Deep-Learning-Optimizers - Collection of the latest, greatest, deep learning optimizers (for Pytorch) - CNN, NLP suitable
deepnet - Educational deep learning library in plain Numpy.