AdamP vs sam
| | AdamP | sam |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 409 | 1,651 |
| Growth | 1.5% | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 3 years ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AdamP
[R] A theoretical review video on "AdamP: Slowing Down the Slowdown for Momentum Optimizers (ICLR 2021)"
Paper: https://openreview.net/forum?id=Iz3zU3M316D Code: https://github.com/clovaai/AdamP Project page: https://clovaai.github.io/AdamP/
sam
What is the correct way to sum loss into a total loss and then to backprop?
From this I understand that I shouldn't reuse the same loss variable across both forward passes, but I'm not sure how else to do it. I thought I could create a variable called total_loss, add each loss to it, and then backprop over it after the iterations, but I'm not sure whether that's the correct approach.
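For context, a minimal sketch of the two standard PyTorch patterns (the model, data, and loop below are placeholders, not from the original post): either keep a running total_loss tensor and call backward() once after the iterations, or call backward() on each per-iteration loss and let the gradients accumulate in .grad before optimizer.step(). Both produce the same gradients; only the memory profile differs.

```python
import torch

# Toy setup, just to make the two patterns concrete.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batches = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(4)]

# Option 1: accumulate a total_loss tensor, backprop once at the end.
optimizer.zero_grad()
total_loss = 0.0
for x, y in batches:
    total_loss = total_loss + criterion(model(x), y)  # each term keeps its graph alive
total_loss.backward()
optimizer.step()

# Option 2: call backward() per iteration; gradients add up in .grad.
optimizer.zero_grad()
for x, y in batches:
    criterion(model(x), y).backward()  # frees each graph immediately (lower memory)
optimizer.step()
```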
[R] Sharpness-Aware Minimization for Efficiently Improving Generalization
They reached SOTA on a few tasks. Do you really believe the entire community missed the magic hyperparameters of batch size 128 and Adam to beat SOTA? I think getting SOTA really solidifies the approach, although the 2x speed cost seems heavy. As for implementation, it looks fairly trivial to adapt to any optimizer, at least judging from this GitHub implementation: https://github.com/davda54/sam (usage sketched below).
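A rough sketch of how a SAM training step looks with that repository's wrapper; the SAM class and its first_step/second_step methods follow that repo's README, while the model and data here are placeholders:

```python
import torch
from sam import SAM  # sam.py from https://github.com/davda54/sam

# Placeholder model and data, just to show the two-pass update.
model = torch.nn.Linear(10, 2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = SAM(model.parameters(), torch.optim.SGD, rho=0.05, lr=0.1, momentum=0.9)

inputs, targets = torch.randn(128, 10), torch.randint(0, 2, (128,))

# First forward-backward pass: gradients at the current weights.
criterion(model(inputs), targets).backward()
optimizer.first_step(zero_grad=True)   # ascend to the perturbed weights

# Second forward-backward pass: gradients at the perturbed weights.
criterion(model(inputs), targets).backward()
optimizer.second_step(zero_grad=True)  # restore weights, apply the base SGD update
```

The 2x cost mentioned above is exactly these two forward-backward passes per update.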
Help me implement this paper expanding on Google's SAM optimizer
Here is the code for SAM. SAM isn't too complicated. There are two forward and backward passes, a gradient ascent step after the first and a gradient descent step after the second. The gradient ascent step produces the perturbed ("noised") SAM model: for each p in the param group, add epsilon, which is rho * (p.grad / grad_norm), with rho being SAM's only hyperparameter (sketched below).
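A minimal plain-PyTorch sketch of that ascent/restore logic, under the assumption that grad_norm is the global L2 norm over all parameter gradients; the function names here are made up for illustration, and rho defaults to the 0.05 used in the repo linked above:

```python
import torch

def sam_ascent_step(params, rho=0.05):
    """Perturb each parameter by epsilon = rho * p.grad / grad_norm (gradient ascent),
    returning the epsilons so the weights can be restored before the real update."""
    params = [p for p in params if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2)
    epsilons = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)            # move to the perturbed ("noised") weights
            epsilons.append(e)
    return params, epsilons

def sam_restore(params, epsilons):
    """Undo the ascent step so the base optimizer updates the original weights."""
    with torch.no_grad():
        for p, e in zip(params, epsilons):
            p.sub_(e)
```

In a training loop this sits between the two passes: backward, sam_ascent_step, zero the gradients, forward-backward again at the perturbed weights, sam_restore, then the base optimizer's step uses the second-pass gradients.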
What are some alternatives?
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
pytorch-optimizer - torch-optimizer -- collection of optimizers for Pytorch
Adan - Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
southpaw - Python Fanduel API (2023) - Lineup Automation
OASIS - Official implementation of the paper "You Only Need Adversarial Supervision for Semantic Image Synthesis" (ICLR 2021)
PHP Documentor 3 - Documentation Generator for PHP
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts including Latin, Chinese, Arabic, Devanagari, Cyrillic and etc.
simple-sam - Sharpness-Aware Minimization for Efficiently Improving Generalization
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
RAdam - On the Variance of the Adaptive Learning Rate and Beyond
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
ContraD - Code for the paper "Training GANs with Stronger Augmentations via Contrastive Discriminator" (ICLR 2021)