I would be very disappointed if that were the case. But no, it looks like it's set up to capture variance: the BO algorithm wraps an "Expected Improvement Optimizer":
https://github.com/SimonBlanke/Gradient-Free-Optimizers/blob...
which selects new points based on both the model's mean estimate and its variance (see around line 58).
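For reference, that is exactly what the expected-improvement acquisition does: it rewards candidates with high predicted mean *and* candidates with high predictive variance. A minimal sketch of the standard formula (not GFO's actual code):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected improvement for maximization: trades off the model's
    mean estimate (exploitation) against its variance (exploration)."""
    sigma = np.maximum(sigma, 1e-12)   # avoid division by zero
    imp = mu - best_y - xi             # predicted improvement over incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Two candidates with the same mean: the one with higher variance
# scores higher, so uncertain regions still get explored.
mu = np.array([1.0, 1.0])
sigma = np.array([0.1, 1.0])
ei = expected_improvement(mu, sigma, best_y=1.0)
```

If the acquisition used only the mean, both candidates above would tie; the variance term is what breaks the tie toward the less-explored point.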
There's also Opytimizer [0] for almost every metaheuristic optimization algorithm under the Sun.
[0] https://github.com/gugarosa/opytimizer
Gradient-Free-Optimizers is a lightweight optimization package that serves as a backend for Hyperactive: https://github.com/SimonBlanke/Hyperactive
Hyperactive can do parallel computing with multiprocessing, joblib, or a custom wrapper function.
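The underlying pattern is just farming independent objective evaluations out to a pool. A generic sketch with the standard library (this is the general idea, not Hyperactive's own API):

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

def objective(params):
    """Stand-in for an expensive model evaluation."""
    x = params["x"]
    return -(x - 3) ** 2

candidates = [{"x": x} for x in range(6)]

# For CPU-bound objectives, swap in `from multiprocessing import Pool`
# (and guard the script with `if __name__ == "__main__":`).
with Pool(processes=2) as pool:
    scores = pool.map(objective, candidates)

best = candidates[scores.index(max(scores))]  # {'x': 3}
```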
I've used this, and it works nicely: https://github.com/numericalalgorithmsgroup/pybobyqa. I'd be happy if it were added to your project; then I could just use yours and have access to a bunch of alternatives with the same API.
I will look into this algorithm. Thanks for the suggestion. I have some basic explanations of the optimization techniques and their parameters in a separate repository: https://github.com/SimonBlanke/optimization-tutorial
But there is still a lot of work to be done.
If you have an expensive but not high-dimensional problem, you might want to try https://github.com/DLR-SC/sqpdfo .
This looks super interesting. I have previously considered using the Bayesian Optimization [0] package for some work, but the ability to switch out the underlying algorithms is appealing.
Perhaps a bit of a far-out question: I would be interested in using this to optimize real-world (i.e. slow, expensive, noisy) processes. A caveat is that the work is done in batches (e.g. N experiments at a time). Is there a mechanism by which I could feed in my results from previous rounds and have the algorithm suggest the next N configurations, sufficiently uncorrelated to explore promising space without bunching on top of each other? My immediate read is that I could use the package to pick the next optimal point, but would then have to fall back on random search for the remainder of the batch?
0: https://github.com/fmfn/BayesianOptimization
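One common workaround for the batch case is the "constant liar" heuristic: pick a point, pretend it already returned the best-known value, refit, and pick again; the collapsed variance at each fake observation pushes later picks away from earlier ones. A sketch with scikit-learn's GaussianProcessRegressor (not, as far as I know, something GFO exposes directly):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def suggest_batch(X_obs, y_obs, candidates, n_batch=4):
    """Constant-liar batch suggestion: after each pick, append a fake
    observation at the incumbent best and refit, so the shrunken
    variance there steers subsequent picks elsewhere."""
    X_obs, y_obs = list(X_obs), list(y_obs)
    batch = []
    for _ in range(n_batch):
        gp = GaussianProcessRegressor(normalize_y=True).fit(
            np.array(X_obs), np.array(y_obs))
        mu, sigma = gp.predict(candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-12)
        best = max(y_obs)
        z = (mu - best) / sigma
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        pick = int(np.argmax(ei))
        batch.append(candidates[pick])
        X_obs.append(candidates[pick])
        y_obs.append(best)  # the "lie": assume a best-so-far outcome
    return np.array(batch)

# Toy 1-D usage: two real observations, ask for three uncorrelated picks.
X_obs = [[0.0], [1.0]]
y_obs = [0.0, 1.0]
cands = np.linspace(-2.0, 3.0, 51).reshape(-1, 1)
batch = suggest_batch(X_obs, y_obs, cands, n_batch=3)
```

This keeps your real results and the fake ones separate between rounds: feed only the real measurements back in at the start of the next batch.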
Yes, it is quite easy to switch algorithms via the "gpr" parameter; you just have to write a wrapper class. I am currently working on a repository that explains how to do that in detail: https://github.com/SimonBlanke/surrogate-models
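Assuming the wrapper needs a scikit-learn-GP-style interface (fit, plus predict with return_std=True) — check the surrogate-models repository for the exact contract — a hypothetical wrapper swapping in a quantile gradient-boosting surrogate might look like:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

class QuantileSurrogate:
    """Hypothetical wrapper class: approximates the mean with the median
    quantile and the spread with the 16th/84th quantiles, exposing the
    sklearn-GP-style fit/predict interface a gpr-style parameter expects."""

    def fit(self, X, y):
        self.mid = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)
        self.hi = GradientBoostingRegressor(loss="quantile", alpha=0.84).fit(X, y)
        self.lo = GradientBoostingRegressor(loss="quantile", alpha=0.16).fit(X, y)
        return self

    def predict(self, X, return_std=False):
        mu = self.mid.predict(X)
        if return_std:
            # Half the 16-84 quantile spread as a rough 1-sigma estimate.
            std = np.maximum((self.hi.predict(X) - self.lo.predict(X)) / 2, 1e-12)
            return mu, std
        return mu

# Toy usage: fit on a 1-D curve and query mean plus uncertainty.
X = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
y = np.sin(4 * X).ravel()
mu, std = QuantileSurrogate().fit(X, y).predict(X, return_std=True)
```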