| | ExpensiveOptimBenchmark | Hyperactive |
|---|---|---|
| Mentions | 1 | 8 |
| Stars | 19 | 490 |
| Growth | - | - |
| Activity | 3.9 | 7.7 |
| Latest Commit | 7 months ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ExpensiveOptimBenchmark
- 29 Python real-world optimization tutorials
For problems with continuous decision variables, it is not trivial to come up with faster approaches on a modern many-core CPU. But even with discrete input (scheduling and planning), new continuous optimizers can compete. The trick is to combine parallel optimization runs with numba to perform around 1e6 fitness evaluations per second. The advantage is that writing a fitness function is much easier than, for instance, implementing incremental score calculation in OptaPlanner, and it is more flexible when you have to handle non-standard problems. For very expensive optimizations (like https://github.com/AlgTUDelft/ExpensiveOptimBenchmark), parallelizing the fitness evaluation matters more than using surrogate models.
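The parallel-evaluation idea above can be sketched with the standard library. The sphere objective and the candidate population here are placeholders, not part of any of the projects discussed; in practice the objective would be numba-compiled (so worker threads can run it concurrently with `nogil=True`) or dispatched to a `ProcessPoolExecutor` for pure-Python objectives:

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(x):
    # Hypothetical expensive objective: the sphere function stands in
    # for a costly simulation or model evaluation.
    return sum(xi * xi for xi in x)

def evaluate_population(population, workers=4):
    # Evaluate all candidates concurrently. Threads are enough when the
    # objective releases the GIL (e.g. a numba function compiled with
    # nogil=True); for pure-Python objectives, swap in ProcessPoolExecutor.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

population = [(float(i), 1.0) for i in range(5)]
scores = evaluate_population(population)
print(min(scores))  # → 1.0 (candidate (0.0, 1.0))
```

For an expensive objective, batching a whole population through the pool like this keeps every core busy during the evaluation phase, which is where nearly all of the wall-clock time goes.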
Hyperactive
- Hyperactive Version 4.5 Released
- Hyperactive: An optimization and data collection toolbox for AutoML
- Hyperactive: Optimize computationally expensive models with powerful algorithms
- Show HN: Hyperactive – A highly versatile AutoML Toolbox
- Hyperactive – Easy Neural Architecture Search for Deep Learning in Python
Check out the Neural Architecture Search Tutorial here: https://nbviewer.jupyter.org/github/SimonBlanke/hyperactive-...
Neural Architecture Search is just one of many optimization applications you can work on with Hyperactive. Check out the examples in the official GitHub repository: https://github.com/SimonBlanke/Hyperactive/tree/master/examp...
- Gradient-Free-Optimizers – A collection of modern optimization methods in Python
Gradient-Free-Optimizers is a lightweight optimization package that serves as a backend for Hyperactive: https://github.com/SimonBlanke/Hyperactive
Hyperactive supports parallel computing via multiprocessing, joblib, or a custom wrapper function.
What are some alternatives?
fast-cma-es - A Python 3 gradient-free optimization library
mango - Parallel Hyperparameter Tuning in Python
parmoo - Python library for parallel multiobjective simulation optimization
pybobyqa - Python-based Derivative-Free Optimization with Bound Constraints
opytimizer - 🐦 Opytimizer is a Python library consisting of meta-heuristic optimization algorithms.
OpenMetadata - Open Standard for Metadata. A Single place to Discover, Collaborate and Get your data right.
optuna-examples - Examples for https://github.com/optuna/optuna
optimization-tutorial - Tutorials for the optimization techniques used in Gradient-Free-Optimizers and Hyperactive.
anovos - Anovos - An Open Source Library for Scalable feature engineering Using Apache-Spark
Auto_ViML - Automatically Build Multiple ML Models with a Single Line of Code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.
Gradient-Free-Optimizers - Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
bytehub - ByteHub: making feature stores simple