nnue-pytorch
KataGo
| | nnue-pytorch | KataGo |
|---|---|---|
| Mentions | 14 | 49 |
| Stars | 283 | 3,235 |
| Growth | 3.5% | - |
| Activity | 6.3 | 9.3 |
| Latest commit | about 1 month ago | 10 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nnue-pytorch
-
Are Super-GMs far more cautious in opening choice than they were even ten years ago?
There's extremely detailed information on how Stockfish's neural network evaluation works, but none of it will tell you why the engines' assessments changed: https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md
- Why are people using bitboards for chess input?
- Resources for learning and implementing a NNUE for a chess engine?
-
I am the first author of Stockfish. Ask me anything.
If you want a readable explanation of all the details, this document is phenomenal.
- What's a simple engine to modify? (Preferably in Python)
-
"RL Fine-Tuning: Scalable Online Planning via Reinforcement Learning Fine-Tuning", Fickinger et al 2021 {FB}
Getting SOTA in chess would be earth-shattering, especially since Stockfish has now adopted very light-weight NNs (called NNUE) and has doubled down on alpha-beta search, regaining the upper hand against A0 style programs.
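To make the "doubled down on alpha-beta search" point concrete, here is a toy negamax-style alpha-beta sketch. This is not Stockfish's actual code: the tree, scores, and `evaluate`/`children` helpers are made up for illustration, with `evaluate` standing in for where an engine would run its lightweight NNUE evaluation.

```python
# Minimal negamax alpha-beta sketch. `evaluate` is a placeholder for a
# lightweight NNUE-style static evaluation; here the "game tree" is just
# nested lists whose leaves are integer scores from the side to move's view.

def evaluate(state):
    # Placeholder static evaluation (a real engine would run the NNUE net here).
    return state if isinstance(state, int) else 0

def children(state):
    # Hypothetical move generator: internal nodes are lists of successor states.
    return state if isinstance(state, list) else []

def alphabeta(state, depth, alpha=-10**9, beta=10**9):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = -10**9
    for child in moves:
        # Negamax: a child's score is negated, since it is the opponent's view.
        score = -alphabeta(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: the opponent won't allow this line
            break
    return best

# Tiny example: two replies for us, each with two opponent replies.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, 2))  # → 3 (maximize over the opponent's minimized replies)
```

The cutoff is the whole point: a fast, cheap-to-update evaluation like NNUE lets alpha-beta visit enormous numbers of nodes, which is why this combination regained the upper hand against MCTS-based A0-style programs.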
- Where would an absolute beginner to neural networks start when trying to learn how to build a NNUE evaluation function?
-
Stockfish 14 Released
Stockfish NNUE is a deep network. You can find out more about its architecture and internal workings here:
https://github.com/glinscott/nnue-pytorch/blob/master/docs/n...
It's a pretty interesting read.
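The central trick that document describes is the "efficiently updatable" first layer: because the input features are sparse and binary, the first-layer output (the accumulator) can be patched incrementally when a move changes only a few features, instead of being recomputed from scratch. A toy sketch of that idea, with made-up sizes, weights, and feature indices (the real network uses HalfKP-style features and quantized integer weights):

```python
# Toy sketch of an NNUE-style incrementally updated accumulator.
# All sizes and weights below are invented for illustration.

HIDDEN = 4      # toy hidden size (real nets use hundreds of units)
FEATURES = 8    # toy number of sparse binary input features

# Hypothetical first-layer weight matrix: one row per feature.
W = [[(f * HIDDEN + h) % 5 - 2 for h in range(HIDDEN)] for f in range(FEATURES)]
bias = [1] * HIDDEN

def full_refresh(active_features):
    # Accumulator = bias + sum of weight rows for every active feature.
    acc = list(bias)
    for f in active_features:
        for h in range(HIDDEN):
            acc[h] += W[f][h]
    return acc

def update(acc, removed, added):
    # Incremental update after a move: subtract vacated features, add new ones.
    for f in removed:
        for h in range(HIDDEN):
            acc[h] -= W[f][h]
    for f in added:
        for h in range(HIDDEN):
            acc[h] += W[f][h]
    return acc

acc = full_refresh({0, 3, 5})
acc = update(acc, removed=[3], added=[6])   # a "move" turning feature 3 into 6
assert acc == full_refresh({0, 5, 6})       # matches a from-scratch refresh
```

A move touches only a handful of features, so the update costs a few row additions instead of a full matrix product, which is what makes the evaluation cheap enough to call at every node of an alpha-beta search.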
-
Official release version of Stockfish 14
[0] https://tests.stockfishchess.org/tests/view/60dae5363beab81350aca077
[1] https://nextchessmove.com/dev-builds
[2] https://stockfishchess.org/blog/2021/stockfish-13/
[3] https://lczero.org/blog/2021/06/the-importance-of-open-data/
[4] https://github.com/official-stockfish/Stockfish/commit/e8d64af1
[5] https://github.com/glinscott/nnue-pytorch/
[6] https://stockfishchess.org/get-involved/
-
How do Neural Networks work?
https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md There is some info there.
KataGo
-
After AI beat them, professional Go players got better and more creative
> KataGo was trained with more knowledge of the game (feature engineering and loss engineering), so it trained faster.
Not really important to your point, but it's not really just that it uses more game knowledge. Mostly it's that a small but dedicated community (especially lightvector) worked hard to build on what AlphaGo and LeelaZero did. Lightvector is a genius and put a lot of effort into KataGo. It wasn't just a matter of adding some game knowledge and being done. https://github.com/lightvector/KataGo?tab=readme-ov-file#tra... has a bunch of info if you're interested.
-
Monte-Carlo Graph Search from First Principles
I immediately recognised the author as the genius behind KataGo: https://github.com/lightvector/KataGo
- Request for help getting two specific outputs from the Katago AI engine
-
KataGo should be partially resistant to cyclic groups now
(also, if you want to donate GPU time, https://katagotraining.org/ would be happy to have more people contributing to training as well!)
-
Man beats machine at Go in human victory over AI
> Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
My take: what Kellin Pelrine really exploited is that the AI can't learn and adapt. Even GPT can't learn or adapt to anything beyond its context window. It took a computer to find and teach him the winning strategy, and it probably took a lot longer than AlphaGo did to train. But once he learned, he had the advantage; meanwhile AlphaGo never adapted and learned to counter the strategy itself, because it can't.
One thing to note is that he beat KataGo [1] and Leela Zero [2], but not AlphaGo or AlphaZero, because the AlphaGos aren't public. So it's possible he wouldn't actually beat the real AlphaZero with this strategy. But considering that the strategy he used should in theory work against any model with AlphaGo/AlphaZero's design (he beat Leela Zero, which has the exact same model), and that Leela Chess and Stockfish are apparently better than AlphaZero now, I think he would still win.
[1] https://github.com/lightvector/KataGo
[2] https://github.com/leela-zero/leela-zero
Experimentally, KataGo did also try some limited ways of using external data at the end of its June 2020 run, and has continued to do so into its most recent public distributed run, "kata1" at https://katagotraining.org/. External data is not necessary for reaching top levels of play, but still appears to provide some mild benefits against some opponents, and noticeable benefits in a useful analysis tool for a variety of kinds of situations that don't occur in self-play but that do occur in human games and games that users wish to analyze.
-
I wonder if these ChatGPT answers will ever get nuked
I've been using ChatGPT since launch and constantly seeking out examples of how others have been using it. A few years ago I started using KataGo with Sabaki to improve my go-playing abilities. I've known about token embeddings in neural networks before ChatGPT was a twinkle in OpenAI's eye. I was there, but I haven't seen everything you've seen, so please show me. If the truth is that ChatGPT has canned responses to some prompt or set of prompts, then I want to believe that it does. If I have misconceptions about anything, I want to break those misconceptions. As long as your beliefs and mine contradict one another, one of us has the opportunity to learn.
-
Human Go players beat top Go AIs using a "trick"
For some stuff besides LCB, see https://github.com/lightvector/KataGo/blob/master/docs/KataGoMethods.md for a summary of a few more recent other things KataGo added that hadn't been done in earlier bots.
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
I'd suggest KataGo, which is much stronger and more actively developed than Leela Zero https://github.com/lightvector/KataGo
- KataGo changes training framework from TensorFlow to PyTorch
What are some alternatives?
Ceres - Ceres - an MCTS chess engine for research and recreation
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
Stockfish - A free and strong UCI chess engine
katrain - Improve your Baduk skills by training with KataGo!
irwin - irwin - the protector of lichess from all chess players villainous
online-go.com - Source code for the Online-Go.com web interface
Koivisto - UCI Chess engine
lizzie - Lizzie - Leela Zero Interface
fishtest - The Stockfish testing framework
BadukMegapack - Installer for various AI Baduk softwares
chessx - Sources of the official ChessX version.
leela-zero - Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper.