pytorch-optimizer vs AdasOptimizer

| | pytorch-optimizer | AdasOptimizer |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 2,946 | 85 |
| Growth | - | - |
| Activity | 3.1 | 5.8 |
| Latest commit | about 1 month ago | over 3 years ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
pytorch-optimizer
- [D]: Implementation: Deconvolutional Paragraph Representation Learning
  The specific implementation is from [here](https://github.com/jettify/pytorch-optimizer), since PyTorch doesn't provide it directly.
- VQGAN+CLIP: "RAdam" from torch_optimizer could not be imported? (see the import sketch after this list)
- [R] AdasOptimizer Update: Cifar-100+MobileNetV2: Adas generalizes 15% better and 9x faster than Adam
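A likely cause of the RAdam import failure mentioned above: RAdam was merged into core PyTorch as torch.optim.RAdam (v1.10+), and newer torch-optimizer releases dropped their own copy. Below is a minimal, version-tolerant import sketch; the toy model and dummy batch are placeholders, not from the original posts:

```python
import torch

# Try core PyTorch first (RAdam landed in torch.optim in v1.10);
# fall back to older torch-optimizer releases that still ship it.
try:
    from torch.optim import RAdam
except ImportError:
    from torch_optimizer import RAdam  # pre-0.2.0 torch-optimizer

model = torch.nn.Linear(10, 2)                  # placeholder model
optimizer = RAdam(model.parameters(), lr=1e-3)

loss = model(torch.randn(4, 10)).sum()          # dummy forward pass
loss.backward()
optimizer.step()
```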
AdasOptimizer
- [R] AdasOptimizer Update: Cifar-100+MobileNetV2: Adas generalizes 15% better and 9x faster than Adam
  I think so too. I was comfortable posting this because the same code was used for each optimizer: https://github.com/YanaiEliyahu/AdasOptimizer/blob/master/misc/cifar-100-mobilenetv2/model_with_training.py.txt. If you care to find what I did wrong, go for it. (A sketch of such an optimizer-agnostic loop follows this list.)
- Optimizer obsoletes step-size scheduling, 100% on MNIST's training set in 11 epochs
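To illustrate the "same code for each optimizer" point above, here is a minimal sketch of an optimizer-agnostic training loop. Everything in it (model, data, hyperparameters) is a placeholder, and Adam/SGD stand in for the Adam-vs-Adas comparison since Adas is not part of torch.optim:

```python
import torch
from torch import nn

def train(optimizer_cls, epochs=3, **opt_kwargs):
    """Run the identical training loop for any optimizer class."""
    torch.manual_seed(0)                 # same initialization for a fair comparison
    model = nn.Linear(10, 2)             # placeholder model
    data = torch.randn(64, 10)           # placeholder batch
    target = torch.randint(0, 2, (64,))
    optimizer = optimizer_cls(model.parameters(), **opt_kwargs)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        optimizer.step()
    return loss.item()

# Only the optimizer class and its keyword arguments change between runs.
print(train(torch.optim.Adam, lr=1e-3))
print(train(torch.optim.SGD, lr=1e-2, momentum=0.9))
```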
What are some alternatives?
sam - SAM: Sharpness-Aware Minimization (PyTorch)
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
imagenette - A smaller subset of 10 easily classified classes from Imagenet, and a little more French
tensorflow - An Open Source Machine Learning Framework for Everyone
simple-sam - Sharpness-Aware Minimization for Efficiently Improving Generalization
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
RAdam - On the Variance of the Adaptive Learning Rate and Beyond
Caffe - Caffe: a fast open framework for deep learning.