Popular Comparisons
- alpha-zero-boosted VS KataGo
- alpha-zero-boosted VS neural_network_chess
- alpha-zero-boosted VS katrain
- alpha-zero-boosted VS adversarial-robustness-toolbox
- alpha-zero-boosted VS leela-zero
- alpha-zero-boosted VS mars
- alpha-zero-boosted VS mctx
- alpha-zero-boosted VS boardlaw
- alpha-zero-boosted VS mljar-supervised
- alpha-zero-boosted VS Rating-Correlations
Alpha-zero-boosted Alternatives
Similar projects and alternatives to alpha-zero-boosted
-
neural_network_chess
Free Book about Deep-Learning approaches for Chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)
-
adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
-
mars
Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
-
mljar-supervised
Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
-
Rating-Correlations
Predicts Chess960 or Crazyhouse ratings from bullet, blitz, and other ratings, for either the Lichess.org or Chess.com servers.
alpha-zero-boosted reviews and mentions
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
> I came up with a nifty implementation in Python that outperforms the naive impl by 30x, allowing a pure python MCTS/NN interop implementation. See https://www.moderndescartes.com/essays/deep_dive_mcts/
Great post!
Chasing pointers in the MCTS tree is definitely a slow approach, although there are typically fewer than 900 "considerations" per move for AlphaZero. I've found that getting value/policy predictions from a neural network (or a GBDT[1]) for the node expansions during those considerations is at least an order of magnitude slower than the MCTS tree-hopping logic.
[1] https://github.com/cgreer/alpha-zero-boosted
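The cost split described in that comment can be seen in a minimal UCT sketch. This is illustrative only, not the alpha-zero-boosted implementation: each of the per-move "considerations" (simulations) traverses the tree cheaply, then pays for one expensive evaluator call at the leaf, which is where the time goes.

```python
import math
import random

class Node:
    def __init__(self, prior):
        self.prior = prior        # policy prior P(s, a)
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}        # action -> Node

    def value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def ucb_score(parent, child, c_puct=1.5):
    # PUCT-style selection score used by AlphaZero-family searches
    u = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return child.value() + u

def legal_actions(state):
    return [0, 1, 2]              # toy game with three moves

def evaluate(state):
    # Stand-in for the expensive NN/GBDT call: returns (value, priors).
    actions = legal_actions(state)
    return random.uniform(-1, 1), {a: 1 / len(actions) for a in actions}

def run_mcts(root_state, num_considerations=800):
    _, priors = evaluate(root_state)
    root = Node(prior=1.0)
    root.children = {a: Node(p) for a, p in priors.items()}
    for _ in range(num_considerations):
        node, path = root, [root]
        # Selection: cheap pointer-chasing down the tree
        while node.children:
            _, node = max(node.children.items(),
                          key=lambda kv: ucb_score(path[-1], kv[1]))
            path.append(node)
        # Expansion + evaluation: the slow step the comment refers to
        value, priors = evaluate(None)
        node.children = {a: Node(p) for a, p in priors.items()}
        # Backup the value along the visited path
        for n in path:
            n.visit_count += 1
            n.value_sum += value
    # Play the most-visited root action
    return max(root.children.items(), key=lambda kv: kv[1].visit_count)[0]
```

With a real model, `evaluate` would batch leaf states into a single value/policy call, since that call dominates the runtime of each consideration.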
-
MuZero: Mastering Go, chess, shogi and Atari without rules
What you can do is check out the algorithm at particular stages of development. AlphaZero and friends start out not being very good at the game, then learn over time and become superhuman. You typically checkpoint the model's weights at various stages, so early on the algorithm plays like a 600-elo chess player and eventually reaches superhuman elo. If you want to train against it, you can load the weights from various difficulty stages and gradually play against those versions until you can beat them.
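That checkpoint-ladder idea can be sketched roughly like this. File names and helper functions here are hypothetical, not from the linked repo:

```python
import os
import pickle

def save_checkpoint(model, elo, directory="checkpoints"):
    # Snapshot the model at an estimated strength level (hypothetical scheme).
    os.makedirs(directory, exist_ok=True)
    with open(os.path.join(directory, f"model_elo_{elo}.pkl"), "wb") as f:
        pickle.dump(model, f)

def load_opponent(target_elo, directory="checkpoints"):
    # Pick the strongest checkpoint at or below the requested difficulty.
    elos = sorted(
        int(name.split("_")[-1].split(".")[0])
        for name in os.listdir(directory)
        if name.startswith("model_elo_")
    )
    best = max(e for e in elos if e <= target_elo)
    with open(os.path.join(directory, f"model_elo_{best}.pkl"), "rb") as f:
        return pickle.load(f)
```

You would then spar against `load_opponent(700)`, beat it, and move up to `load_opponent(900)`, and so on.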
I implemented AlphaZero (but not MuZero yet) using GBDTs instead of NNs here, if you're curious how it would work: https://github.com/cgreer/alpha-zero-boosted. Instead of saving the "weights" of a neural network, you save the splitpoints for the GBDT value/policy models, but the concept is the same.
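The "splitpoints instead of weights" point can be illustrated with a toy serialized tree. The format and field names below are made up for illustration and are not the repo's actual serialization:

```python
import json

# A GBDT "model file" is just its trees' split features, split
# thresholds, and leaf values -- no weight tensors anywhere.
tree = {
    "feature": 0, "threshold": 0.5,
    "left":  {"leaf": -0.8},   # predicted value if x[feature] <= threshold
    "right": {"leaf":  0.6},   # predicted value otherwise
}

def predict(node, x):
    # Walk the splitpoints until a leaf is reached.
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["leaf"]

def gbdt_predict(trees, x):
    # Boosted-ensemble prediction: sum the trees' outputs.
    return sum(predict(t, x) for t in trees)

# "Saving the model" is serializing the splitpoints, analogous to
# saving NN weights; restoring them reproduces the same predictions.
blob = json.dumps([tree])
restored = json.loads(blob)
```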
Stats
The primary programming language of alpha-zero-boosted is Python.