tune-sklearn vs. guildai

| | tune-sklearn | guildai |
|---|---|---|
| Mentions | 4 | 16 |
| Stars | 462 | 856 |
| Growth | - | 0.1% |
| Activity | 0.0 | 8.8 |
| Last commit | 6 months ago | 9 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tune-sklearn mentions
-
LightGBM vs. XGBoost: Which distributed version is faster?
Of course not! :)
The Ray ecosystem is actually chock-full of integrations, from XGBoost Ray (https://docs.ray.io/en/master/xgboost-ray.html) to PyTorch on Ray (https://docs.ray.io/en/master/using-ray-with-pytorch.html), and of course hyperparameter search with Ray Tune for a variety of libraries, including Sklearn (https://github.com/ray-project/tune-sklearn).
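For context, a minimal sketch of what the tune-sklearn integration looks like in practice (a sketch, not the library's canonical example; the estimator, parameter ranges, and trial count are illustrative, and the Bayesian backend assumes scikit-optimize is installed):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV  # drop-in replacement for RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Illustrative parameter ranges; (low, high) tuples are sampled by the Bayesian backend.
param_dists = {
    "alpha": (1e-4, 1e-1),
    "epsilon": (1e-2, 1e-1),
}

search = TuneSearchCV(
    SGDClassifier(),
    param_distributions=param_dists,
    n_trials=20,
    search_optimization="bayesian",  # requires scikit-optimize
)
search.fit(X, y)
print(search.best_params_)
```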
-
[D] I'm new and scrappy. What tips do you have for better logging and documentation when training or hyperparameter training?
If you mainly use scikit-learn, you should consider using tune-sklearn.
-
[P] Bayesian Hyperparameter Optimization with tune-sklearn in PyCaret
Just wanted to share a not widely known feature of PyCaret. By default, PyCaret's tune_model uses the tried and tested RandomizedSearchCV from scikit-learn. However, not everyone knows that tune_model() also offers advanced options, such as cutting-edge hyperparameter tuning techniques like Bayesian optimization through libraries such as tune-sklearn, Hyperopt, and Optuna.
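A hedged sketch of what that looks like in PyCaret (the dataset, estimator id, and iteration count are illustrative; tune-sklearn and its Ray dependencies need to be installed for this search_library option):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, tune_model

data = get_data("juice")              # example dataset bundled with PyCaret
setup(data, target="Purchase", session_id=42)

model = create_model("lightgbm")      # any PyCaret estimator id works here

# Bayesian optimization via tune-sklearn instead of the default RandomizedSearchCV.
tuned = tune_model(
    model,
    search_library="tune-sklearn",
    search_algorithm="bayesian",
    n_iter=20,
)
```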
-
[D] Here are 3 ways to Speed Up Scikit-Learn - Any suggestions?
You might want to try out tune-sklearn, as it seems to work for CatBoost as well. I am trying to use tune-sklearn to speed up my scikit-learn hyperparameter tuning.
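For the speed-up angle, the drop-in replacement pattern is roughly the following (a sketch; the estimator, grid, and early-stopping settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneGridSearchCV  # same interface as sklearn's GridSearchCV

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)

param_grid = {"alpha": [1e-4, 1e-3, 1e-2], "epsilon": [0.01, 0.1]}

# Trials run in parallel on Ray, and early_stopping prunes unpromising configurations.
search = TuneGridSearchCV(
    SGDClassifier(),
    param_grid,
    early_stopping=True,
    max_iters=10,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```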
guildai mentions
-
guildai VS cascade - a user suggested alternative
2 projects | 5 Dec 2023
-
[D] Who here are convinced that they have a really good setup that keeps track of their ML experiments?
Experiment tracking in DvC is implemented using git to store snapshots of a project and related artifacts. You might take a look at Guild AI's support for DvC, which is tightly integrated with DvC stages. You can run any of the stages defined for a project and get a properly isolated run: each run works from a copy of the project, so modifying files while a run is in progress doesn't corrupt it, and concurrent runs are properly supported. Once you have runs in Guild, you can use any number of tools to study, compare, export them, etc.
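To make the workflow concrete, here is a minimal sketch of the basic Guild pattern (plain Guild, not the DvC integration specifically; the script name and flag are hypothetical). Guild treats module-level globals as run flags and gives each run its own isolated run directory:

```python
# train.py - hypothetical script; Guild picks up the global below as a run flag.
lr = 0.01  # override from the CLI, e.g. `guild run train.py lr=0.1`

loss = 1.0 / lr  # stand-in for real training; "key: value" output is captured as a scalar
print(f"loss: {loss}")

# Typical CLI workflow (run from a shell, shown here as comments):
#   guild run train.py lr=0.1     # creates an isolated run directory
#   guild runs                    # list runs
#   guild compare                 # compare flags and captured scalars across runs
```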
-
[D] Deploying SOTA models into my own projects
I built an experiment tracking tool (Guild AI) that focuses on code/model reuse and so this question is dear to my heart :) Best of luck!
-
[P] I reviewed 50+ open-source MLOps tools. Here’s the result
I'm not aware of experiment tracking in Jupyter notebooks themselves. Guild AI is able to run notebooks as experiments, however.
-
[D] What MLOps platform do you use, and how helpful are they?
Disclosure - I'm the author of Guild AI so take this for the biased opinion that it is.
-
[N] Experiment tracking with DvC and Guild AI
I'm the author of Guild AI (open source experiment tracking). For some time now Guild users have asked for DvC support. This is now available as a pre-release.
-
[D] Why doesn’t your team use an experiment tracking tool?
Guild AI now has support for running DvC stages as experiments. DvC uses git under the covers to manage project state for each experiment, along with the experiment results. Guild doesn't touch your git repo and instead copies your project source to a new run directory. This ensures that you have a correct record of your experiment without churning your project state.
-
Data Science toolset summary from 2021
Guild.ai - https://guild.ai/
-
[D] How do you ensure reproducibility?
-
[D] I'm new and scrappy. What tips do you have for better logging and documentation when training or hyperparameter training?
Use guild and pytorch-lightning. Make it easy for new contributors to get your data by using dvc as a data access tool.
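As one concrete illustration of DVC as a data access tool, a sketch using the dvc.api Python API (the repo URL, file path, and revision below are placeholders):

```python
import dvc.api

# Stream a DVC-tracked file straight from a Git repo without cloning it manually.
with dvc.api.open(
    "data/train.csv",
    repo="https://github.com/example/project",  # placeholder repo
    rev="v1.0",                                 # any Git revision: branch, tag, or commit
) as f:
    header = f.readline()
    print(header)
```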
What are some alternatives?
auto-sklearn - Automated Machine Learning with scikit-learn
MLflow - Open source platform for the machine learning lifecycle
hummingbird - Hummingbird compiles trained ML models into tensor computation for faster inference.
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
dvc - 🦉 ML Experiments and Data Management with Git
spock - spock is a framework that helps manage complex parameter configurations during research and development of Python applications
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
wandb - 🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.