tuning_playbook vs Mage

| | tuning_playbook | Mage |
|---|---|---|
| Mentions | 16 | 77 |
| Stars | 25,209 | 7,131 |
| Growth | 2.8% | 4.6% |
| Activity | 4.7 | 9.9 |
| Latest commit | 26 days ago | 4 days ago |
| Language | Python | |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tuning_playbook
-
When Random Numbers Are Too Random: Low Discrepancy Sequences
These are also called quasirandom numbers. Besides games, another use case is hyperparameter search for neural networks (a small sampling sketch follows below).
https://github.com/google-research/tuning_playbook?tab=readm...
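For readers unfamiliar with the idea, here is a minimal sketch of drawing hyperparameter trials from a low-discrepancy (Sobol) sequence, assuming scipy >= 1.7 for `scipy.stats.qmc`; the search space and the commented-out `train_and_evaluate` call are illustrative, not taken from the linked playbook.

```python
# Minimal sketch: quasirandom (low-discrepancy) hyperparameter search.
# Requires scipy >= 1.7 for scipy.stats.qmc; the search space is illustrative.
from scipy.stats import qmc

# Draw 16 scrambled Sobol points in the unit square (powers of two keep the
# sequence balanced), then map each dimension to a hyperparameter range.
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
points = sampler.random(n=16)  # shape (16, 2), values in [0, 1)

learning_rates = 10.0 ** (-5.0 + 3.0 * points[:, 0])  # log-uniform in [1e-5, 1e-2)
dropout_rates = 0.5 * points[:, 1]                     # uniform in [0.0, 0.5)

for lr, p_drop in zip(learning_rates, dropout_rates):
    print(f"trial: learning_rate={lr:.2e}, dropout={p_drop:.2f}")
    # val_loss = train_and_evaluate(lr, p_drop)  # hypothetical training call
```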
-
Hyperparameter Optimization for LLMs via Scaling Laws
[2] https://github.com/google-research/tuning_playbook
-
Beyond Automatic Differentiation
Batch size can be used for regularisation, but using it that way limits training performance. From the Google Research Tuning Playbook:
> The batch size governs the training speed and shouldn't be used to directly tune the validation set performance. Often, the ideal batch size will be the largest batch size supported by the available hardware.
> […]
> As long as all hyperparameters are well-tuned (especially the learning rate and regularization hyperparameters) and the number of training steps is sufficient, the same final performance should be attainable using any batch size (see Shallue et al. 2018).
https://github.com/google-research/tuning_playbook#choosing-...
The ideal case is full-batch training with tuneable regularisation; the hardware just gets expensive.
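To make the quoted advice concrete, here is a minimal sketch of that workflow: fix the batch size at whatever the hardware supports and tune the learning rate against validation loss instead. The `train_and_evaluate` function below is a stand-in, not code from the playbook.

```python
# Minimal sketch of the workflow the quoted advice implies: fix the batch size
# at whatever the hardware supports, then tune the learning rate (and other
# hyperparameters) against validation loss. `train_and_evaluate` is a stand-in.
import math
import random

BATCH_SIZE = 1024  # largest batch size that fits on the available hardware

def train_and_evaluate(batch_size: int, learning_rate: float) -> float:
    """Stand-in for a real training run; returns a toy validation loss."""
    # Toy surrogate with a minimum near lr = 3e-3, purely for illustration.
    return abs(math.log10(learning_rate) - math.log10(3e-3)) + 0.1

def sample_learning_rate(low: float = 1e-5, high: float = 1e-1) -> float:
    """Sample a learning rate log-uniformly between `low` and `high`."""
    return 10.0 ** random.uniform(math.log10(low), math.log10(high))

best_lr, best_loss = None, float("inf")
for _ in range(20):
    lr = sample_learning_rate()
    val_loss = train_and_evaluate(BATCH_SIZE, lr)
    if val_loss < best_loss:
        best_lr, best_loss = lr, val_loss

print(f"best learning rate at batch size {BATCH_SIZE}: {best_lr:.2e} "
      f"(val loss {best_loss:.3f})")
```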
-
Modeling methodology
Regarding tuning params, this is an excellent read: https://github.com/google-research/tuning_playbook
- About the hardware
- I asked an AI to create an Asmongold story and then had another AI generate the voice. There it is, dude.
-
Trending ML repos of the week 📈
3️⃣ google-research/tuning_playbook
- AI relies entirely on stealing from European and American open source
- Deep learning tuning playbook
Mage
- FLaNK AI - April 22, 2024
-
A mage on the Hero’s Journey: a fantasy epic on how a startup rose from the ashes
In the coming years, Mage will create a cooperative experience so that developers can build data pipelines with their team and level up together. After that journey, Mage will go on an epic quest to create the first open-world community experience in the data universe.
-
Data sources episode 2: AWS S3 to Postgres Data Sync using Singer
Link to original blog: https://www.mage.ai/blog/data-sources-ep-2-aws-s3-to-postgres-data-sync-using-singer
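For context on how such a sync works under the hood, here is a minimal sketch of the Singer message protocol that a tap like tap-s3-csv speaks and a target like target-postgres consumes; the stream name and fields are made up for illustration.

```python
# Minimal sketch of the Singer protocol that a tap such as tap-s3-csv speaks:
# JSON messages written one per line to stdout, which a target such as
# target-postgres reads from stdin. Stream name and fields are made up.
import json
import sys

def emit(message: dict) -> None:
    sys.stdout.write(json.dumps(message) + "\n")

# Describe the stream once...
emit({
    "type": "SCHEMA",
    "stream": "users",
    "key_properties": ["id"],
    "schema": {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "email": {"type": "string"},
        },
    },
})
# ...then emit records and, periodically, resumable state.
emit({"type": "RECORD", "stream": "users", "record": {"id": 1, "email": "a@example.com"}})
emit({"type": "STATE", "value": {"bookmarks": {"users": {"id": 1}}}})
```

In practice the two programs are connected with a shell pipe, typically something like `tap-s3-csv --config tap_config.json | target-postgres --config target_config.json` (exact flags vary by tap and target version).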
-
What are some open-source ML pipeline managers that are easy to use?
I would recommend the following (a minimal example with one of them is sketched below):
- https://www.mage.ai/
- https://dagster.io/
- https://www.prefect.io/
- https://metaflow.org/
- https://zenml.io/home
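To give a feel for how lightweight these tools can be, here is a minimal sketch of a pipeline in one of them, assuming the Prefect 2.x API; the task bodies are placeholders.

```python
# Minimal sketch of a pipeline in Prefect (one of the tools listed above),
# assuming the Prefect 2.x API; the task bodies are placeholders.
from prefect import flow, task

@task
def extract() -> list:
    return [1, 2, 3]

@task
def transform(rows: list) -> list:
    return [r * 2 for r in rows]

@task
def load(rows: list) -> None:
    print(f"loaded {len(rows)} rows")

@flow
def etl():
    load(transform(extract()))

if __name__ == "__main__":
    etl()
```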
-
Mage Battlegrounds: Craft insights from real-time customer behavior analysis
You're invited to participate in the very first Mage Battlegrounds: Craft insights from real-time customer behavior analysis, a 24-hour virtual hackathon hosted by Shashank Mishra! This data engineering competition takes place on Saturday, April 15, 2023, beginning at 11 a.m. PST, and is a global event open to all participants who register.
-
Looking for an open-source project
Try this feature: https://github.com/mage-ai/mage-ai/issues/1166
-
Daskqueue: Dask-based distributed task queue
Seeing if we can use it in https://github.com/mage-ai/mage-ai
-
Data Pipeline on a Shoestring
That being said, there's a solid family of services just breaking ground that make local pipeline deployment easier. Check out https://www.mage.ai, which has a clear path to cloud deployment of locally developed pipelines, though that path isn't well documented yet. Also see https://www.neuronsphere.io, which doesn't have a public solution yet (they're internally testing an alpha), but they have built a cloud-deployable solution for their paying customers and are working to release one for freemium use.
-
Trending ML repos of the week 📈
7️⃣ mage-ai/mage-ai
-
Delta without using Spark
Yes, check out how Mage does it: https://github.com/mage-ai/mage-ai/tree/master/mage_integrations/mage_integrations/destinations/delta_lake_s3
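For anyone wondering what the Spark-free approach looks like in general (not necessarily how Mage's destination implements it), here is a minimal sketch using the delta-rs Python bindings (the `deltalake` package); the bucket, path, and credentials are placeholders.

```python
# Minimal sketch of writing a Delta table to S3 without Spark, using the
# delta-rs Python bindings (the `deltalake` package). Bucket, path, and
# credentials are placeholders, not Mage's actual configuration.
import pandas as pd
from deltalake import DeltaTable, write_deltalake

df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

storage_options = {
    "AWS_ACCESS_KEY_ID": "<key>",
    "AWS_SECRET_ACCESS_KEY": "<secret>",
    "AWS_REGION": "us-east-1",
    # Depending on the delta-rs version, S3 writes need either a locking
    # provider (e.g. DynamoDB) or this flag for single-writer setups.
    "AWS_S3_ALLOW_UNSAFE_RENAME": "true",
}

# Create or append to the Delta table at the given S3 URI.
write_deltalake(
    "s3://my-bucket/delta/users",
    df,
    mode="append",
    storage_options=storage_options,
)

# Read it back, still without Spark.
table = DeltaTable("s3://my-bucket/delta/users", storage_options=storage_options)
print(table.to_pandas())
```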
What are some alternatives?
dadaptation - D-Adaptation for SGD, Adam and AdaGrad
dagster - An orchestration platform for the development, production, and observation of data assets.
arb - Arb has been merged into FLINT -- use https://github.com/flintlib/flint/ instead
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
nn-zero-to-hero - Neural Networks: Zero to Hero
vscode-dvc - Machine learning experiment tracking and data versioning with DVC extension for VS Code
ML-Papers-Explained - Explanation to key concepts in ML
sqlmesh - Efficient data transformation and modeling framework that is backwards compatible with dbt.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
mito - The mitosheet package, trymito.io, and other public Mito code.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
Data-Science-Roadmap - Data Science Roadmap from A to Z