sam vs pytorch-optimizer

| | sam | pytorch-optimizer |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 1,655 | 2,950 |
| Growth | - | - |
| Activity | 0.0 | 3.1 |
| Latest Commit | 3 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sam
- What is the correct way to sum loss into a total loss and then to backprop?
From this I understand that I shouldn't reuse the same loss variable for both forward passes, but I'm not sure how else to do it. I thought I could maybe create a variable called total_loss, add each loss to it, and then backprop over it after the iterations, but I'm not sure whether that's the correct approach.
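The approach described in that excerpt does work in PyTorch: as long as the individual losses stay tensors (not Python numbers), summing them builds one graph that a single backward() call can traverse. Below is a minimal sketch under assumed placeholders; model, optimizer, loss_fn, and the batch loop are hypothetical stand-ins for the question's actual setup.

```python
import torch

# Hypothetical stand-ins for the question's actual model / optimizer / loss.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

def train_step(batches):
    optimizer.zero_grad()
    total_loss = torch.zeros(())          # accumulate losses as a tensor, not a float
    for inputs, targets in batches:
        loss = loss_fn(model(inputs), targets)
        total_loss = total_loss + loss    # keeps each forward pass in the graph
    total_loss.backward()                 # one backward pass over the summed graph
    optimizer.step()
    return total_loss.item()

# Example usage with random data standing in for two forward passes.
batches = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(2)]
print(train_step(batches))
```

An equivalent pattern is to call loss.backward() inside the loop (gradients accumulate in .grad) and call optimizer.step() once at the end; that avoids holding every forward pass's graph in memory at the same time.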
- [R] Sharpness-Aware Minimization for Efficiently Improving Generalization
They reached SOTA on a few tasks. Do you really believe the entire community missed the magic hyperparameters of batch size 128 and Adam to beat SOTA? I think getting SOTA really solidifies the approach, although the 2x speed cost seems heavy. As for implementation, it looks fairly trivial to adapt to all optimizers, at least judging from this GitHub repo: https://github.com/davda54/sam
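For reference, the linked repository wraps an arbitrary base optimizer, which is what makes the adaptation to other optimizers straightforward. The sketch below shows the wrapper's two-step usage roughly as that repo's README presents it; the SAM import path, the first_step/second_step method names, and the hyperparameters are assumptions to verify against the repository.

```python
import torch
from sam import SAM  # assumed import path from the linked repository

model = torch.nn.Linear(10, 2)            # placeholder model
base_optimizer = torch.optim.SGD           # any optimizer class can serve as the base
optimizer = SAM(model.parameters(), base_optimizer, lr=0.1, momentum=0.9)

def train_step(inputs, targets, loss_fn=torch.nn.functional.cross_entropy):
    # first forward-backward pass at the current weights
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.first_step(zero_grad=True)   # ascend to the perturbed weights

    # second forward-backward pass at the perturbed weights
    loss_fn(model(inputs), targets).backward()
    optimizer.second_step(zero_grad=True)  # undo the perturbation and descend
    return loss.item()
```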
- Help me implement this paper expanding on Google's SAM optimizer
Here is the code for SAM. SAM isn't too complicated. There are two forward and backward passes: a gradient ascent step after the first one and the gradient descent step after the second. The gradient ascent produces the perturbed ("noised") SAM model, computed by adding epsilon to each p in the param group, where epsilon is rho * (p.grad / grad_norm) and rho is SAM's only hyperparameter.
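To make that description concrete, here is a minimal from-scratch sketch of the two-pass update it describes: ascend by epsilon = rho * p.grad / grad_norm, recompute gradients at the perturbed weights, undo the perturbation, then take the usual descent step. The helper name and the rho default are illustrative, not taken from the post.

```python
import torch

def sam_update(model, optimizer, loss_fn, inputs, targets, rho=0.05):
    # first forward/backward pass: gradients at the current weights
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # gradient ascent: add epsilon = rho * p.grad / ||grad|| to every parameter
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm(p=2) for p in model.parameters() if p.grad is not None]))
        eps = {}
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                      # move to the "noised" SAM weights
            eps[p] = e
    optimizer.zero_grad()

    # second forward/backward pass: gradients at the perturbed weights
    loss_fn(model(inputs), targets).backward()

    # gradient descent: restore the weights, then step with the SAM gradients
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```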
pytorch-optimizer
- [D]: Implementation: Deconvolutional Paragraph Representation Learning
The specific implementation is from [here](https://github.com/jettify/pytorch-optimizer), since PyTorch doesn't have it directly (see the usage sketch after this list).
- VQGAN+CLIP: "RAdam" from torch_optimizer could not be imported?
- [R] AdasOptimizer Update: Cifar-100+MobileNetV2 Adas generalizes with Adas 15% better and 9x faster than Adam
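For context on the posts above, jettify/pytorch-optimizer installs as the torch_optimizer package and exposes its optimizers as drop-in replacements for torch.optim classes. A minimal usage sketch follows; the model, data, and hyperparameters are placeholders, and DiffGrad is just one of the bundled optimizers.

```python
import torch
import torch_optimizer as optim  # the jettify/pytorch-optimizer package

model = torch.nn.Linear(10, 2)                      # placeholder model
optimizer = optim.DiffGrad(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

As for the RAdam import error mentioned above: newer releases of the package reportedly dropped some optimizers once they were merged into PyTorch itself, so torch.optim.RAdam (available in PyTorch 1.10+) is a possible fallback if torch_optimizer.RAdam is missing in your installed version.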
What are some alternatives?
southpaw - Python Fanduel API (2023) - Lineup Automation
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
AdamP - AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
PHP Documentor 3 - Documentation Generator for PHP
imagenette - A smaller subset of 10 easily classified classes from Imagenet, and a little more French
simple-sam - Sharpness-Aware Minimization for Efficiently Improving Generalization
RAdam - On the Variance of the Adaptive Learning Rate and Beyond
PythonPID_Tuner - Python PID Tuner - Based on a FOPDT model obtained using an Open-Loop Process Reaction Curve
AdasOptimizer - ADAS is short for Adaptive Step Size; unlike optimizers that merely normalize the derivative, it fine-tunes the step size, making step-size scheduling obsolete and achieving state-of-the-art training performance