tuning_playbook vs ML-Papers-Explained

| | tuning_playbook | ML-Papers-Explained |
|---|---|---|
| Mentions | 16 | 4 |
| Stars | 25,209 | 6,710 |
| Growth | 2.8% | 0.8% |
| Activity | 4.7 | 8.8 |
| Latest commit | 26 days ago | 5 days ago |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tuning_playbook
- When Random Numbers Are Too Random: Low Discrepancy Sequences
These are also called quasirandom numbers. Besides games, another use case is hyperparameter search for neural networks.
https://github.com/google-research/tuning_playbook?tab=readm...
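A minimal sketch of that idea, using SciPy's quasi-Monte Carlo module to draw low-discrepancy hyperparameter candidates; the two hyperparameters and their ranges below are illustrative assumptions, not values from the playbook:

```python
# Sketch: quasirandom (Sobol) sampling for hyperparameter search.
# The search space (log10 learning rate, dropout) is an assumed example.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True, seed=0)  # 2 hyperparameters
unit_points = sampler.random_base2(m=4)          # 2^4 = 16 low-discrepancy points

# Map the unit hypercube onto the search ranges.
lower = [-5.0, 0.0]   # log10(lr) in [-5, -1], dropout in [0, 0.5]
upper = [-1.0, 0.5]
candidates = qmc.scale(unit_points, lower, upper)

for log_lr, dropout in candidates:
    lr = 10.0 ** log_lr
    # train_and_evaluate(lr, dropout)  # hypothetical trial evaluation
    print(f"lr={lr:.2e}, dropout={dropout:.2f}")
```

Because Sobol points cover the unit hypercube more evenly than independent uniform draws, the same trial budget explores the search space with fewer clusters and gaps.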
- Hyperparameter Optimization for LLMs via Scaling Laws
[2] https://github.com/google-research/tuning_playbook
- Beyond Automatic Differentiation
Batch size can be used for regularisation, but using it that way limits training performance. From the Google Research Tuning Playbook:
> The batch size governs the training speed and shouldn't be used to directly tune the validation set performance. Often, the ideal batch size will be the largest batch size supported by the available hardware.
> […]
> As long as all hyperparameters are well-tuned (especially the learning rate and regularization hyperparameters) and the number of training steps is sufficient, the same final performance should be attainable using any batch size (see Shallue et al. 2018).
https://github.com/google-research/tuning_playbook#choosing-...
The ideal case is full-batch training with tunable regularisation; the hardware just gets expensive.
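A minimal sketch of the playbook's point, assuming a toy PyTorch setup (synthetic data, a tiny MLP, and an illustrative learning-rate grid, none of which come from the playbook): the learning rate is re-tuned for each batch size rather than treating batch size itself as a tuning knob, and the re-tuned runs should reach comparable final losses.

```python
# Sketch only: toy data and model; the batch sizes and learning-rate grid are
# illustrative assumptions, not values from the tuning playbook.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
X = torch.randn(2048, 10)
y = (X.sum(dim=1, keepdim=True) > 0).float()  # simple synthetic labels

def final_loss(batch_size: int, lr: float, epochs: int = 5) -> float:
    """Train a small MLP with plain SGD and return the full-dataset loss."""
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():
        return loss_fn(model(X), y).item()

# Re-tune the learning rate for each batch size instead of tuning the batch size.
for bs in (32, 256):
    best_loss, best_lr = min((final_loss(bs, lr), lr) for lr in (0.01, 0.1, 0.3))
    print(f"batch_size={bs}: best final loss {best_loss:.4f} at lr={best_lr}")
```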
- Modeling methodology
Regarding tuning params, this is an excellent read: https://github.com/google-research/tuning_playbook
- About the hardware
- I asked an AI to create an Asmongold story and then had another AI generate voice. There it is dude
- Trending ML repos of the week 📈
3️⃣ google-research/tuning_playbook
- AI relies entirely on stealing open source from Europe and America
- Deep learning tuning playbook
ML-Papers-Explained
What are some alternatives?
dadaptation - D-Adaptation for SGD, Adam and AdaGrad
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
arb - Arb has been merged into FLINT -- use https://github.com/flintlib/flint/ instead
awesome-chatgpt-prompts - This repo includes ChatGPT prompt curation to use ChatGPT better.
nn-zero-to-hero - Neural Networks: Zero to Hero
From-0-to-Research-Scientist-resources-guide - A detailed and tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)