botorch vs noisy-bayesian-optimization

| | botorch | noisy-bayesian-optimization |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 2,949 | 17 |
| Growth | 1.5% | - |
| Activity | 9.4 | 0.0 |
| Last commit | 5 days ago | over 2 years ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
botorch
botorch VS SMT - a user suggested alternative
2 projects | 6 Dec 2023
- BoTorch – Bayesian Optimization in PyTorch
[D] Uncertainty estimation with calibration set (with MC Dropout)
The true answer for this is to model the problem in a Bayesian way in the first place, using, for example, https://botorch.org/ and https://gpytorch.ai/.
Bayesian Optimization Book
Yes, I'm using a binary outcome, since that's what I get from playing a game. To get probabilities I'd have to play many games with the same settings/features/point and take the mean, but that seems to defeat the point of Bayesian optimization choosing the best point to evaluate at each iteration.
The SPSA method seems to work quite well with binary outcomes, and this is what I was trying to beat. Unfortunately, I was never able to converge faster than SPSA (or even close to it), even when increasing the number of samples.
I got some feedback from the botorch team back then: https://github.com/pytorch/botorch/issues/347#:~:text=thomas...
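SPSA is easy to sketch: it estimates a gradient from just two noisy evaluations per step along a random perturbation, which is why it copes reasonably with binary win/loss outcomes. Below is a minimal illustrative sketch; the gain schedules and the `game_loss` objective are invented for demonstration, not taken from the discussion above.

```python
import random

def spsa(loss, theta, iters=2000, a=0.2, c=0.2):
    """Minimal SPSA for minimization: each step estimates the gradient
    from two noisy evaluations along a random +/-1 perturbation."""
    theta = list(theta)
    n = len(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard SPSA gain decay exponents
        ck = c / k ** 0.101
        delta = [random.choice((-1, 1)) for _ in range(n)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Hypothetical noisy binary objective: a "game" won with a probability
# that peaks at x = 1.0; each call returns a single win/loss sample.
def game_loss(theta):
    p_win = max(0.0, min(1.0, 1.0 - 0.4 * (theta[0] - 1.0) ** 2))
    return 0.0 if random.random() < p_win else 1.0   # 0 = win, 1 = loss

random.seed(0)
best = spsa(game_loss, [2.0])
print(best)
```

Because every evaluation is a single Bernoulli sample, each individual gradient estimate is very crude, but the decaying step sizes average the noise out and the iterate tends to drift toward the high-win-rate region.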
noisy-bayesian-optimization
Bayesian Optimization Book
I spent a long time trying to implement noisy Bayesian optimization [1], using both standard libraries and code based on my own understanding, but I ultimately never got it to work very well.
It's a real pity, since a smart optimizer for very noisy functions would be really useful. I was trying to use it for chess engine tuning, since I know DeepMind used it for tuning AlphaZero. I really wonder how they got it to work well.
[1] https://github.com/thomasahle/noisy-bayesian-optimization
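The core idea behind noisy Bayesian optimization — a GP surrogate that models observation noise explicitly, with an acquisition rule that compares against the best posterior mean rather than the best (noisy) observation — can be sketched in plain NumPy. Everything here (kernel, length-scale, noise level, toy objective) is an illustrative assumption, not code from the linked repository.

```python
import math
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, noise_var=0.25):
    """GP posterior mean/std at x_q, with an explicit observation-noise term."""
    K = rbf(x_tr, x_tr) + noise_var * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_q)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_tr
    var = 1.0 - np.sum(Ks * sol, axis=0)           # prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI for minimization, measured against the best posterior *mean*."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sd * phi

rng = np.random.default_rng(0)
f = lambda x: (x - 0.7) ** 2                       # hidden objective, optimum at 0.7
noisy = lambda x: f(x) + 0.5 * rng.standard_normal(x.shape)

grid = np.linspace(0.0, 1.0, 201)
x_tr = rng.uniform(0.0, 1.0, 5)                    # initial random design
y_tr = noisy(x_tr)
for _ in range(25):
    mu, sd = gp_posterior(x_tr, y_tr, grid)
    ei = expected_improvement(mu, sd, mu.min())    # best *mean*, not best sample
    x_new = grid[np.argmax(ei)]
    x_tr = np.append(x_tr, x_new)
    y_tr = np.append(y_tr, noisy(np.array([x_new])))

mu, _ = gp_posterior(x_tr, y_tr, grid)
x_best = grid[np.argmin(mu)]                       # report the posterior-mean minimizer
print(x_best)
```

Reporting the minimizer of the posterior mean, rather than the best observed value, is the key change from noiseless BO; BoTorch ships a more principled, reparameterization-based version of the same idea in its noisy expected improvement acquisition functions.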
What are some alternatives?
stat_rethinking_2022 - Statistical Rethinking course winter 2022
Ax - Adaptive Experimentation Platform
smt - Surrogate Modeling Toolbox
optimas - Optimization at scale, powered by libEnsemble