SMAC3
SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization (by automl)
optuna
A hyperparameter optimization framework (by optuna)
| | SMAC3 | optuna |
|---|---|---|
| Mentions | 2 | 34 |
| Stars | 1,009 | 9,681 |
| Growth | 2.4% | 2.2% |
| Activity | 3.2 | 9.9 |
| Last commit | 11 days ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
SMAC3
Posts with mentions or reviews of SMAC3. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-12.
- [D] How to optimize an ANN?
  You can use Optuna, SMAC or hyperopt.
- Finding the optimal parameter
  Apart from the aforementioned comments noting that this is an optimization problem, ready-to-use Python libraries for this kind of problem (accounting for evaluation time) include http://hyperopt.github.io/hyperopt/, https://github.com/automl/SMAC3, or https://www.ray.io/ray-tune
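Neither mention shows what a SMAC3 run actually looks like. As a rough illustration, here is a minimal sketch assuming the SMAC3 2.x facade API (class names and arguments may differ in older releases, and the toy objective is made up):

```python
from ConfigSpace import Configuration, ConfigurationSpace
from smac import HyperparameterOptimizationFacade, Scenario

# Search space with a single float hyperparameter "x" in [0.1, 10.0].
cs = ConfigurationSpace({"x": (0.1, 10.0)})

def train(config: Configuration, seed: int = 0) -> float:
    # Stand-in for training + validation; SMAC minimizes the returned cost.
    return (config["x"] - 2.0) ** 2

scenario = Scenario(cs, n_trials=50)          # evaluation budget of 50 configs
smac = HyperparameterOptimizationFacade(scenario, train)
incumbent = smac.optimize()                   # best configuration found
print(incumbent)
```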
optuna
Posts with mentions or reviews of optuna. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
- Optuna – A Hyperparameter Optimization Framework
  I didn't even know WandB did hyperparameter optimization; I figured it was a neural-network visualizer based on Two Minute Papers. There didn't seem to be many alternatives to Optuna with TPE + persistence in conditional continuous & discrete spaces.
  Anyway, it's doable to make a multi-objective decide_to_prune function with Optuna; here's an example: https://github.com/optuna/optuna/issues/3450#issuecomment-19...
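The linked comment is truncated above, but the general idea can be sketched. Below is a hedged illustration of a hand-rolled decide_to_prune helper for a two-objective study; the helper name, thresholds, and stand-in training loop are all hypothetical, and the rule is written by hand because Optuna's built-in pruners operate on a single reported value:

```python
import optuna

# Hypothetical helper (the name mirrors the comment above; it is not part of
# the Optuna API): decide whether to stop a two-objective trial early.
def decide_to_prune(step, loss, latency, loss_ceiling=0.95, latency_budget=0.05):
    # Prune once past a short warm-up if either objective looks hopeless.
    return step > 10 and (loss > loss_ceiling or latency > latency_budget)

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    width = trial.suggest_int("width", 16, 256, log=True)
    loss, latency = 1.0, width / 5000.0      # stand-ins for measured quantities
    for step in range(100):
        loss *= (1.0 - lr)                   # stand-in for one training step
        if decide_to_prune(step, loss, latency):
            raise optuna.TrialPruned()
    return loss, latency

# Two objectives: minimize validation loss and minimize inference latency.
study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=30)
print(len(study.best_trials), "Pareto-optimal trials")
```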
- How to test optimal parameters
- FOSS hyperparameter optimization framework to automate hyperparameter search
- How did you make that?!
  The network configuration process is usually not particularly scientific and mostly relies on empirical observation. In some cases, tools like Optuna can be used to automatically find the optimal parameters. In others, you can look for modern studies that explore the effect of a given parameter on performance, such as this study (2022), but these are typically very specific to one particular architecture.
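For readers who haven't used Optuna before, a minimal run looks roughly like the sketch below; the search space and toy objective are made up purely for illustration:

```python
import optuna

def objective(trial):
    # Hypothetical search space: a learning rate and a layer count.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_layers = trial.suggest_int("n_layers", 1, 4)
    # Stand-in for training + validation; return the metric to minimize.
    return (lr - 0.01) ** 2 + 0.1 * n_layers

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```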
- [P] We are building a curated list of open source tooling for data-centric AI workflows, looking for contributions.
  Keras Tuner, Optuna: https://github.com/optuna/optuna ?
- How to tune more than 2 hyperparameters in Grid Search in Python?
- Suggestion to optimize algo
  I have used OpenTuner, but I don't think it is maintained anymore. I hear tell that Optuna is what to use now, but I have not used it myself. https://optuna.org (Optuna - A hyperparameter optimization framework)
- Best practices for training PyTorch model
  Research the type of model to get an idea of what hyperparameters to use. I recommend using a hyperparameter optimization library like Optuna to get the best configuration.
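In a training loop like the PyTorch case above, the usual Optuna pattern is to report intermediate validation metrics and let a pruner stop unpromising trials early. A minimal single-objective sketch (with a decaying toy loss standing in for real epochs) might look like this:

```python
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    val_loss = 1.0
    for epoch in range(20):
        val_loss *= (1.0 - lr)           # stand-in for one training epoch
        trial.report(val_loss, epoch)    # report the intermediate metric
        if trial.should_prune():         # let the pruner stop bad trials early
            raise optuna.TrialPruned()
    return val_loss

study = optuna.create_study(
    direction="minimize",
    pruner=optuna.pruners.MedianPruner(n_warmup_steps=5),
)
study.optimize(objective, n_trials=40)
print(study.best_params)
```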
- [D] How to optimize an ANN?
  You can use Optuna, SMAC or hyperopt.
What are some alternatives?
When comparing SMAC3 and optuna, you can also consider the following projects:
- hyperopt - Distributed Asynchronous Hyperparameter Optimization in Python
- Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads.