| | spaceopt | Spearmint |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 42 | 1,529 |
| Growth | - | 0.0% |
| Activity | 3.6 | 0.0 |
| Last commit | almost 2 years ago | over 4 years ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spaceopt
- [D] What kind of Hyperparameter Optimisation do you use?
I had exactly the same problem: how do I naturally integrate human expertise or manual interventions into an automatic hyperparameter optimization process? I ended up writing my own algorithm that helps me achieve that: https://github.com/ar-nowaczynski/spaceopt
Spearmint
- Why do tree-based models still outperform deep learning on tabular data?
It occurs to me that a system trained on peer-reviewed applied-machine-learning literature and Kaggle winners, one that generates candidate structured feature-engineering specifications from plaintext descriptions of what the columns mean in the real world, should be considered a requisite part of the "meta" here.
Ah, and then you could iterate within the resulting feature-engineering-suggestion space as a hyper-parameter between experiments, which could be optimized with e.g. https://github.com/HIPS/Spearmint . The papers write themselves!
- [D] What kind of Hyperparameter Optimisation do you use?
This was some time ago but I had some promising results with Bayesian optimization using a Gaussian Process prior. The method was developed by the guys who wrote Spearmint. That library doesn't support parallelization but I implemented the same technique in Scala without too much difficulty.
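The technique described above, Bayesian optimization with a Gaussian-process prior, can be sketched from scratch with NumPy. This is an illustrative toy on a 1-D objective, not Spearmint's actual implementation: fit a GP posterior to the points evaluated so far, then pick the next candidate by maximizing expected improvement. The kernel, lengthscale, grid, and objective below are all arbitrary choices for the demo.

```python
import math
import numpy as np

# Toy objective to minimize; the optimum is at x = 0.3.
def objective(x):
    return (x - 0.3) ** 2

def rbf_kernel(a, b, length=0.3, variance=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and stddev of a zero-mean GP at x_test."""
    k = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_test)
    chol = np.linalg.cholesky(k)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y_train))
    mu = k_star.T @ alpha
    v = np.linalg.solve(chol, k_star)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance is 1.0
    return mu, np.sqrt(np.maximum(var + noise, 1e-12))

_erf = np.vectorize(math.erf)

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI acquisition function for minimization."""
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + _erf(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu - xi) * cdf + sigma * pdf

# A few initial evaluations, then 10 rounds of "fit GP, evaluate argmax EI".
x_train = np.array([0.0, 0.5, 1.0])
y_train = objective(x_train)
x_grid = np.linspace(0.0, 1.0, 200)
for _ in range(10):
    mu, sigma = gp_posterior(x_train, y_train, x_grid)
    ei = expected_improvement(mu, sigma, y_train.min())
    x_next = x_grid[np.argmax(ei)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

best_x = x_train[np.argmin(y_train)]
print(f"best x = {best_x:.3f}, best y = {y_train.min():.5f}")
```

Spearmint adds much more on top of this skeleton (kernel hyperparameter marginalization via MCMC, input warping, parallel suggestions), but the fit-posterior-then-maximize-acquisition loop is the core idea.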
What are some alternatives?
optuna - A hyperparameter optimization framework
optuna-examples - Examples for https://github.com/optuna/optuna
srbench - A living benchmark framework for symbolic regression
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
axe-testcafe - The helper for using Axe in TestCafe tests
Hyperactive - An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.
yggdrasil-decision-forests - A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
nni - An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyperparameter tuning.
youtube-react - A Youtube clone built in React, Redux, Redux-saga
decision-forests - A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
higgs-logistic-regression