tuning_playbook
A playbook for systematically maximizing the performance of deep learning models. (by fzyzcjy)
dadaptation
D-Adaptation for SGD, Adam and AdaGrad (by facebookresearch)
| | tuning_playbook | dadaptation |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 3 | 487 |
| Growth | - | 1.0% |
| Activity | 10.0 | 5.6 |
| Last commit | over 1 year ago | 7 months ago |
| Language | Python | - |
| License | GNU General Public License v3.0 or later | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tuning_playbook
Posts with mentions or reviews of tuning_playbook.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-01-20.
- [D] "Deep Learning Tuning Playbook" (recently released by Google Brain people)

  https://github.com/fzyzcjy/tuning_playbook - Indeed I have done that: I removed all the click-to-expand sections (in order to print a PDF).
dadaptation
Posts with mentions or reviews of dadaptation.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-24.
- D-Adaptation: Goodbye Learning Rate Headaches? (link in comments)

  Just about a month ago, Facebook research published a paper called “Learning-Rate-Free Learning by D-Adaptation” (link), along with the code implementation (link). The paper is very technical but still worth the read regardless of your level. However, what it promises to deliver sounds very exciting and could save a lot of time spent searching for optimal parameters across different datasets and tasks:
- Has anyone tried Facebook's learning-rate-free optimizer for Reinforcement Learning?

  D-Adaptation - https://github.com/facebookresearch/dadaptation
- Find Optimal Learning Rates for Stable Diffusion Fine-tunes (Link in Comments)
- [R] Learning-Rate-Free Learning by D-Adaptation

  Found relevant code at https://github.com/facebookresearch/dadaptation + all code implementations here
- [D] "Deep Learning Tuning Playbook" (recently released by Google Brain people)

  I tried out Facebook's new learning-rate-free version of Adam for a Swin model I'm working on, and it worked a little better than the best version of AdamW I found with a learning-rate sweep. https://github.com/facebookresearch/dadaptation
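The "learning-rate-free" idea behind the posts above can be sketched in a few lines. This is a toy pure-Python illustration of the distance-estimation trick described in the D-Adaptation paper, not the library's actual implementation (which you would use via the `dadaptation` package): instead of tuning a learning rate, the method maintains a running lower-bound estimate `d` of the unknown distance `D` from the starting point to the solution, and uses `d` as the step-size scale.

```python
# Toy sketch (assumption: simplified 1-D gradient-descent variant of
# D-Adaptation; not the facebookresearch/dadaptation implementation).
# The key quantity d_hat provably lower-bounds the true distance
# D = |x0 - x*|, so d can only grow toward D, never past it.

def d_adapt_gd(grad, x0, d0=0.01, steps=100):
    """Gradient descent whose step-size scale d is adapted on the fly."""
    x = x0
    d = d0          # current lower-bound estimate of D
    s = 0.0         # weighted gradient sum: s_k = sum_i d_i * g_i
    g_sq_sum = 0.0  # accumulated sum of (d_i * g_i)^2
    for _ in range(steps):
        g = grad(x)
        s += d * g
        g_sq_sum += (d * g) ** 2
        x -= d * g  # step scaled by the current estimate d
        if s != 0.0:
            # Distance lower bound; d is kept nondecreasing.
            d_hat = (s * s - g_sq_sum) / (2.0 * abs(s))
            d = max(d, d_hat)
    return x, d

# Minimize f(x) = 0.5 * (x - 3)^2 from x0 = 0, so the true D is 3.
x, d = d_adapt_gd(grad=lambda x: x - 3.0, x0=0.0)
print(x, d)  # x converges to 3; d grows from 0.01 to an O(1) scale
```

The point of the demo is that `d0` can be set tiny and wrong: the estimate grows geometrically until the iterates start making real progress, which is why the posters above could drop the learning-rate sweep entirely.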
What are some alternatives?
When comparing tuning_playbook and dadaptation, you can also consider the following projects:
tuning_playbook - A playbook for systematically maximizing the performance of deep learning models.
Adan - Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models