stat_rethinking_2022
botorch
| | stat_rethinking_2022 | botorch |
|---|---|---|
| Mentions | 13 | 5 |
| Stars | 4,101 | 2,928 |
| Growth | - | 1.5% |
| Activity | 1.8 | 9.4 |
| Latest commit | about 2 years ago | 3 days ago |
| Language | R | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stat_rethinking_2022
- Is there another way to determine the effect of the features other than the built-in feature importance and SHAP values? [Research] [Discussion]
I would recommend the lectures and book of Statistical Rethinking: https://github.com/rmcelreath/stat_rethinking_2022
- Statistical Rethinking (2022 Edition)
- Bayesian Optimization Book
Looks really promising, will give it a read through!
For those looking for an easier entry into Bayesian analysis, I would highly recommend "Statistical Rethinking" by Richard McElreath: https://xcelab.net/rm/statistical-rethinking/. What I really like about Richard's book is that it bypasses a lot of the heavy mathematical/integral work and goes straight into sampling. In my experience, you generally can't do integrals by hand anyway; instead, you describe your model as a hierarchy of probability distributions and let an MCMC sampler take care of the rest. Richard's book also touches on causality (an important and often overlooked topic in ML!), and you can follow his course online: https://github.com/rmcelreath/stat_rethinking_2022
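The workflow that comment describes (write the model as a stack of probability distributions, then let a sampler approximate the posterior) can be sketched with nothing but NumPy. Everything below is illustrative: the coin-flip data, the flat Beta(1, 1) prior, and the hand-rolled random-walk Metropolis sampler are all assumptions for the sake of the sketch; the book itself uses Stan via the `rethinking` R package rather than hand-written samplers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 34  # hypothetical data: 34 heads in 50 coin flips

def log_posterior(p):
    """Log of prior * likelihood, up to a constant; -inf outside (0, 1)."""
    if not 0.0 < p < 1.0:
        return -np.inf
    # Beta(1, 1) prior is flat, so only the binomial likelihood remains.
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

samples = []
p = 0.5  # starting value for the chain
logp = log_posterior(p)
for _ in range(20000):
    proposal = p + rng.normal(scale=0.1)          # random-walk proposal
    logp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_new - logp:   # Metropolis accept rule
        p, logp = proposal, logp_new
    samples.append(p)

posterior = np.array(samples[5000:])  # drop burn-in
print(f"posterior mean of p: {posterior.mean():.3f}")
```

For this toy model the posterior is known exactly (Beta(35, 17), mean 35/52 ≈ 0.673), which makes it easy to check that the sampler works; for real hierarchical models no such closed form exists, which is the whole point of sampling.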
- [E] Statistical Rethinking 2022 by Richard McElreath
botorch
- botorch VS SMT - a user-suggested alternative
2 projects | 6 Dec 2023
- BoTorch – Bayesian Optimization in PyTorch
- Bayesian Optimization Book
Yes, I'm using a binary outcome, since that's what I get from playing a game. To get probabilities I'd have to play many games with the same settings/features/point and take the mean, but that seems to defeat the point of Bayesian optimization choosing the best point to evaluate at each iteration.
The SPSA method seems to work quite well with binary outcomes, and that's what I was trying to beat. Unfortunately, I was never able to converge faster than SPSA (or even close to it), even after increasing the number of samples.
I got some feedback from the botorch team back then: https://github.com/pytorch/botorch/issues/347#:~:text=thomas...
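SPSA, the baseline mentioned above, estimates a full gradient from just two noisy function evaluations per iteration, which is part of why it tolerates binary win/loss feedback. Here is a minimal sketch with an entirely hypothetical "game" whose win probability peaks at an unknown settings vector; the gain schedules and constants are generic textbook choices, not the commenter's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
optimum = np.array([0.3, -0.7])  # unknown best settings (hypothetical)

def play_game(theta):
    """Binary outcome: 1 = loss, 0 = win (we minimize the loss rate)."""
    p_win = np.exp(-np.sum((theta - optimum) ** 2))
    return 0.0 if rng.uniform() < p_win else 1.0

theta = np.zeros(2)
for k in range(2000):
    a = 0.2 / (k + 1) ** 0.602   # step-size schedule
    c = 0.2 / (k + 1) ** 0.101   # perturbation-size schedule
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    # Two evaluations give a simultaneous estimate of every partial derivative.
    g_hat = (play_game(theta + c * delta) - play_game(theta - c * delta)) / (2 * c * delta)
    theta = theta - a * g_hat

print("estimated optimum:", np.round(theta, 2))
```

Even in this toy setting, single win/loss observations make the gradient estimates very noisy, so convergence is slow; averaging several games per evaluation reduces the noise at the cost of more evaluations, which mirrors the trade-off described in the comment above.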
What are some alternatives?
stat_rethinking_2020 - Statistical Rethinking Course Winter 2020/2021
Ax - Adaptive Experimentation Platform
interpretable-ml-book - Book about interpretable machine learning
noisy-bayesian-optimization - Bayesian Optimization for very Noisy functions
stat_rethinking_2023 - Statistical Rethinking Course for Jan-Mar 2023
smt - Surrogate Modeling Toolbox
optimas - Optimization at scale, powered by libEnsemble