leela-zero
alpha-zero-boosted
| | leela-zero | alpha-zero-boosted |
|---|---|---|
| Mentions | 11 | 2 |
| Stars | 5,225 | 79 |
| Growth | 0.0% | - |
| Activity | 0.0 | 3.2 |
| Latest commit | about 1 year ago | almost 4 years ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
leela-zero
-
I guess I have mastered the AI attack
https://github.com/leela-zero/leela-zero, but it is not user-friendly imo
- Does DailyMotion have an open road ahead of them?
-
Human Go players beat top Go AIs using a "trick"
Yeah, see https://github.com/leela-zero/leela-zero/pull/883 for the discussions near the origin of this idea, which Leela Zero was the first to use many years ago. KataGo's implementation is a bit different in minor ways, but still based on the same mathematical idea. https://github.com/lightvector/KataGo/blob/master/cpp/search/searchhelpers.cpp#L482
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
Totally agree. I don't even know what benefit they'd get at this point from keeping some parts locked up.
Anyway if you want something runnable Leela has a nice reimplementation: https://github.com/leela-zero/leela-zero
-
Please help me settle an argument with my friend about KataGo
See https://github.com/leela-zero/leela-zero/issues/2445 for an example with Leela Zero failing to see an atari, even *with* tons of search. This is a similar issue - neural nets have a hard time perceiving things that depend sensitively on large areas when unusual shapes are involved.
-
Go-playing trick defeats world-class Go AI—but loses to human amateurs
(https://github.com/leela-zero/leela-zero/issues/2273)
-
The blue recommended move was there for most of the game, screaming at me for not playing it. This doesn't look that big, though? Why would this be significant?
I think it's Leela https://github.com/leela-zero/leela-zero
- [Recommendation thread] Tired of keyboard politics? Let's each recommend a website we often visit after bypassing the firewall.
-
[D] How OpenAI Sold its Soul for $1 Billion: The company behind GPT-3 and Codex isn’t as open as it claims.
There is Leela Zero
-
Lizzie Suggests Moves Off Board in 9x9 Game
Yeah, it looks like I need to recompile Leela Zero with a 9x9 board size? https://github.com/leela-zero/leela-zero/pull/928 (and https://github.com/leela-zero/leela-zero/issues/2613)
alpha-zero-boosted
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
> I came up with a nifty implementation in Python that outperforms the naive impl by 30x, allowing a pure python MCTS/NN interop implementation. See https://www.moderndescartes.com/essays/deep_dive_mcts/
Great post!
Chasing pointers in the MCTS tree is definitely a slow approach, although typically there are < 800 "considerations" per move for AlphaZero. I've found that getting value/policy predictions from a neural network (or a GBDT[1]) for the node expansions during those considerations is at least an order of magnitude slower than the MCTS tree-hopping logic.
[1] https://github.com/cgreer/alpha-zero-boosted
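To make the pointer-chasing point concrete, here is a toy sketch (hypothetical, not the implementation from the linked essay or repo) of storing MCTS statistics in flat NumPy arrays, so that child selection becomes one vectorized operation over a contiguous block of rows instead of a walk over Python objects:

```python
import numpy as np

class FlatMCTS:
    """Toy MCTS storage: one row per node, with each node's children
    stored as a contiguous block of rows referenced by index. A
    hypothetical sketch of the array-based idea, not any real engine."""

    def __init__(self, capacity):
        self.visit_count = np.zeros(capacity, dtype=np.int64)
        self.value_sum = np.zeros(capacity, dtype=np.float64)
        self.prior = np.zeros(capacity, dtype=np.float64)
        self.first_child = np.full(capacity, -1, dtype=np.int64)
        self.num_children = np.zeros(capacity, dtype=np.int64)
        self.size = 1  # node 0 is the root

    def expand(self, node, priors):
        """Allocate a contiguous block of child rows for `node`."""
        start = self.size
        n = len(priors)
        self.prior[start:start + n] = priors
        self.first_child[node] = start
        self.num_children[node] = n
        self.size += n
        return start

    def select_child(self, node, c_puct=1.5):
        """Vectorized PUCT: one argmax over the node's child block."""
        start = self.first_child[node]
        end = start + self.num_children[node]
        visits = self.visit_count[start:end]
        # Mean value Q for visited children, 0 for unvisited ones.
        q = np.where(visits > 0,
                     self.value_sum[start:end] / np.maximum(visits, 1),
                     0.0)
        # Exploration term U, weighted by the network's prior.
        u = (c_puct * self.prior[start:end]
             * np.sqrt(self.visit_count[node] + 1) / (1 + visits))
        return start + int(np.argmax(q + u))

tree = FlatMCTS(1024)
tree.expand(0, [0.5, 0.3, 0.2])
best = tree.select_child(0)  # with no visits yet, the highest-prior child
```

Because all statistics live in a handful of preallocated arrays, the selection step is a few NumPy calls regardless of branching factor, which is the kind of win over per-node Python objects the comment above is describing.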
-
MuZero: Mastering Go, chess, shogi and Atari without rules
What you can do is check out the algorithm at particular stages of development. AlphaZero & friends start out not being very good at the game, then over time they learn and become superhuman. You typically checkpoint the model weights at various stages, so early on the algorithm would be like a 600 Elo chess player, and eventually it reaches superhuman Elo levels. So if you want to train against it, you can load up the weights at various difficulty stages and gradually play against those versions of the algorithm until you can beat them.
I implemented AlphaZero (but not MuZero yet) using GBDTs instead of NNs here, if you're curious how it would work: https://github.com/cgreer/alpha-zero-boosted. Instead of saving the "weights" of a GBDT, you save the split points for the value/policy models, but the concept is the same.
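The checkpointing idea above can be sketched in a few lines. This is a hypothetical illustration (the file layout, Elo field, and function names are invented for the example, not taken from the linked repo): save the model parameters with an estimated strength at each training generation, then later load the checkpoint closest to the opponent strength you want to face:

```python
import json
import os
import tempfile

def save_checkpoint(directory, generation, estimated_elo, params):
    """Record one training generation: its parameters plus an
    estimated playing strength (e.g. from round-robin evaluation)."""
    path = os.path.join(directory, f"gen_{generation:04d}.json")
    with open(path, "w") as f:
        json.dump({"generation": generation,
                   "estimated_elo": estimated_elo,
                   "params": params}, f)
    return path

def load_opponent(directory, target_elo):
    """Pick the saved generation whose estimated Elo is closest
    to the strength the human wants to play against."""
    checkpoints = []
    for name in os.listdir(directory):
        with open(os.path.join(directory, name)) as f:
            checkpoints.append(json.load(f))
    return min(checkpoints,
               key=lambda c: abs(c["estimated_elo"] - target_elo))

# Usage: three generations of increasing strength, then pick a ~1200 opponent.
d = tempfile.mkdtemp()
save_checkpoint(d, 1, 600, [0.1])
save_checkpoint(d, 2, 1400, [0.2])
save_checkpoint(d, 3, 2600, [0.3])
opponent = load_opponent(d, 1200)  # picks generation 2
```

For a GBDT-based agent the `params` payload would hold the serialized trees (the split points mentioned above) rather than tensor weights, but the checkpoint-and-select loop is identical.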
What are some alternatives?
KataGo - GTP engine and self-play learning in Go
opensea-js - TypeScript SDK for the OpenSea marketplace
neural_network_chess - Free Book about Deep-Learning approaches for Chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)
mctx - Monte Carlo tree search in JAX
katrain - Improve your Baduk skills by training with KataGo!
koneko - 🐈🌐 nyaa.si terminal BitTorrent tracker
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
leela-zero - Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper.
mars - Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.