alpha-zero-boosted VS mars

Compare alpha-zero-boosted vs mars and see how they differ.

alpha-zero-boosted

A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM) (by cgreer)

mars

Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions. (by mars-project)
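As a rough illustration of what "scales numpy" means here, below is a minimal sketch of Mars' tensor API based on the project's documented examples; treat the exact signatures (chunk_size, .execute()) as assumptions rather than a verified reference.

```python
# Minimal sketch of Mars' numpy-like tensor API (assumed from the project's docs).
import mars.tensor as mt

# Tensors are split into chunks; operations build a lazy computation graph.
a = mt.random.rand(10000, 10000, chunk_size=2000)

# .execute() triggers the (potentially distributed) computation.
total = (a + 1).sum().execute()
print(total)
```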
              alpha-zero-boosted    mars
Mentions      2                     -
Stars         79                    2,677
Growth        -                     0.2%
Activity      3.2                   5.7
Last commit   almost 4 years ago    4 months ago
Language      Python                Python
License       -                     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

alpha-zero-boosted

Posts with mentions or reviews of alpha-zero-boosted. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-15.
  • DeepMind has open-sourced the heart of AlphaGo and AlphaZero
    4 projects | news.ycombinator.com | 15 Feb 2023
    > I came up with a nifty implementation in Python that outperforms the naive impl by 30x, allowing a pure python MCTS/NN interop implementation. See https://www.moderndescartes.com/essays/deep_dive_mcts/

    Great post!

    Chasing pointers in the MCTS tree is definitely a slow approach, although typically there are < 900 "considerations" per move for AlphaZero. I've found that getting value/policy predictions from a neural network (or GBDT[1]) for the node expansions during those considerations is at least an order of magnitude slower than the MCTS tree-hopping logic.

    [1] https://github.com/cgreer/alpha-zero-boosted

  • MuZero: Mastering Go, chess, shogi and Atari without rules
    3 projects | news.ycombinator.com | 23 Dec 2020
    What you can do is check out the algorithm at particular stages of development. AlphaZero & friends start out not being very good at the game, then over time they learn and become superhuman. You typically checkpoint the model's weights at various stages, so early on the algorithm might play like a 600-Elo chess player and eventually reach superhuman Elo levels. If you wanted to train against it, you could load the weights from various difficulty stages and gradually play against those versions until you can beat them.

    I implemented AlphaZero (but not MuZero yet) using GBDTs instead of NNs here, if you're curious about how it would work: https://github.com/cgreer/alpha-zero-boosted. Instead of saving "weights", for a GBDT you save the split points for the value/policy models, but the concept is the same.
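For context on the first comment's point about avoiding pointer chasing: the linked essay keeps each node's child statistics in flat numpy arrays so that PUCT-style selection becomes a single vectorized pass instead of a loop over child objects. The sketch below illustrates that idea only; the class name, exploration constant, and priors are invented for illustration and are not code from either repository.

```python
import numpy as np

class UCTNode:
    """Keeps per-child statistics in flat numpy arrays so the selection
    step is one vectorized argmax rather than a Python loop over child
    objects (the idea described in the linked essay)."""

    def __init__(self, num_actions, priors):
        self.children = {}  # action index -> UCTNode, created lazily on expansion
        self.child_priors = np.asarray(priors, dtype=np.float32)
        self.child_visits = np.zeros(num_actions, dtype=np.float32)
        self.child_total_value = np.zeros(num_actions, dtype=np.float32)

    def child_Q(self):
        # Mean value per child; +1 avoids division by zero for unvisited children.
        return self.child_total_value / (1.0 + self.child_visits)

    def child_U(self, c_puct=1.5):
        # PUCT exploration bonus, computed for all children at once.
        return (c_puct * self.child_priors
                * np.sqrt(self.child_visits.sum() + 1.0) / (1.0 + self.child_visits))

    def best_action(self):
        return int(np.argmax(self.child_Q() + self.child_U()))

# Example: pick an action given uniform priors over 9 moves.
node = UCTNode(num_actions=9, priors=np.full(9, 1.0 / 9.0))
print(node.best_action())
```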
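To make the second comment's "save the split points instead of the weights" remark concrete, here is a hypothetical LightGBM checkpointing sketch. It is not code from alpha-zero-boosted; the file name, feature shapes, and training data are made up for illustration.

```python
import numpy as np
import lightgbm as lgb

# Hypothetical training data: game-state features and observed outcomes.
X = np.random.rand(1000, 32)
y = np.random.rand(1000)

# Train a small value model; the saved file stores the trees' split points
# and leaf values, playing the role a weights file plays for a neural network.
value_model = lgb.train({"objective": "regression", "verbosity": -1},
                        lgb.Dataset(X, label=y), num_boost_round=50)
value_model.save_model("value_model_checkpoint_0001.txt")

# Later, reload an earlier (weaker) checkpoint to play against that version.
earlier_model = lgb.Booster(model_file="value_model_checkpoint_0001.txt")
value_estimates = earlier_model.predict(X[:5])
print(value_estimates)
```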

mars

Posts with mentions or reviews of mars. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning mars yet.
Tracking mentions began in Dec 2020.

What are some alternatives?

When comparing alpha-zero-boosted and mars you can also consider the following projects:

KataGo - GTP engine and self-play learning in Go

modin - Modin: Scale your Pandas workflows by changing a single line of code

neural_network_chess - Free Book about Deep-Learning approaches for Chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)

eland - Python Client and Toolkit for DataFrames, Big Data, Machine Learning and ETL in Elasticsearch

katrain - Improve your Baduk skills by training with KataGo!

xarray - N-D labeled arrays and datasets in Python

adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Python-Schema-Matching - A python tool using XGboost and sentence-transformers to perform schema matching task on tables.

leela-zero - Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper.

scikit-survival - Survival analysis built on top of scikit-learn

mctx - Monte Carlo tree search in JAX

dmatrix2np - Convert XGBoost's DMatrix format to np.array