tinyzero
Easily train AlphaZero-like agents on any environment you want! (by s-casci)
boardlaw
Scaling scaling laws with board games. (by andyljones)
| | tinyzero | boardlaw |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 395 | 36 |
| Growth | - | - |
| Activity | 8.5 | 2.9 |
| Last commit | 4 months ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tinyzero
Posts with mentions or reviews of tinyzero. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-20.
- Show HN: Easily train AlphaZero-like agents on any environment you want

  This repo and the code files appear to be missing any licensing details.

  You'll also likely want to mention the "needs python >= 3.8" requirement in the README (https://github.com/s-casci/tinyzero/blob/244a263976cd9a09f5f...). OT1H, I would hope folks are keeping their Pythons current, but OTOH dev environments are gonna dev environment.
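A common way to surface such a version requirement at runtime, beyond documenting it in the README, is a fail-fast interpreter check. The sketch below is a generic convention, not code from the tinyzero repo; only the ">= 3.8" bound comes from the comment above:

```python
import sys

# Fail fast with a clear message when the interpreter is too old.
# The ">= 3.8" bound comes from the discussion above; the check itself
# is a generic convention, not taken from the tinyzero codebase.
if sys.version_info < (3, 8):
    raise RuntimeError(
        f"tinyzero requires Python >= 3.8, found {sys.version.split()[0]}"
    )
```

Declaring `python_requires=">=3.8"` in the package metadata achieves the same thing at install time.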
boardlaw
Posts with mentions or reviews of boardlaw. We have used some of these posts to build our list of alternatives and similar projects.
- Debugging reinforcement learning

  The 'probe envs' section further down gives one method for achieving this. Here's a concrete example from my recent work, where I'm building out a parallel MCTS (tricky!). There are three tests in the section I've highlighted, all testing the ability of the MCTS to estimate the value of a state in increasingly complex circumstances. All the tests decisively pass or fail because I sub'd out the env and agent for simple, deterministic variants. Moreover, if, say, the trivial_test, which uses a single player, passes but the test_two_player fails, that tells me the problem is something to do with how I'm handling multiple players.
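The probe-env idea above can be sketched in a few lines. All names here are hypothetical, not from boardlaw; the point is that a deterministic environment with a single known outcome gives the value estimator exactly one correct answer, so the test passes or fails decisively:

```python
class ConstantRewardEnv:
    """Probe env: every episode ends immediately with reward +1.

    Because the outcome is deterministic, the true value of the start
    state is exactly 1.0 -- any estimator that disagrees is buggy.
    """

    def reset(self):
        return 0  # a single state, labelled 0

    def step(self, action):
        # (next_state, reward, done) -- the episode always ends at once
        return 0, 1.0, True


def estimate_value(env, n_episodes=100):
    """Monte Carlo value estimate of the start state (stand-in for the
    real agent being tested)."""
    total = 0.0
    for _ in range(n_episodes):
        env.reset()
        _, reward, done = env.step(action=0)
        assert done  # probe env guarantees one-step episodes
        total += reward
    return total / n_episodes


# Decisive pass/fail: the expected value is exactly 1.0.
assert abs(estimate_value(ConstantRewardEnv()) - 1.0) < 1e-9
```

The same pattern extends to the multi-player case mentioned above: a two-player probe env with a fixed winner isolates player-handling bugs from everything else.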
What are some alternatives?
When comparing tinyzero and boardlaw you can also consider the following projects:
- chess - Program for playing chess in the console against AI or human opponents
- neural_network_chess - Free book about deep-learning approaches for chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)
- alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)