leela-zero
KataGo
| | leela-zero | KataGo |
|---|---|---|
| Mentions | 11 | 49 |
| Stars | 5,225 | 3,235 |
| Growth | 0.0% | - |
| Activity | 0.0 | 9.3 |
| Latest commit | about 1 year ago | 6 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
leela-zero
-
I guess I have mastered the AI attack
https://github.com/leela-zero/leela-zero but it is not user friendly imo
- Does DailyMotion have a clear road ahead of them?
-
Human Go players beat top Go AIs using a "trick"
Yeah, see https://github.com/leela-zero/leela-zero/pull/883 for the discussions near the origin of this idea, which Leela Zero was the first to use many years ago. KataGo's implementation is a bit different in minor ways, but still based on the same mathematical idea. https://github.com/lightvector/KataGo/blob/master/cpp/search/searchhelpers.cpp#L482
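For readers unfamiliar with the idea being discussed, the linked code is about choosing the move to play by a lower confidence bound (LCB) on its estimated winrate rather than by raw visit count, so a lightly-explored move with a lucky average doesn't win out. A minimal sketch of that idea follows; the function names and the normal-approximation error bar are illustrative assumptions, not KataGo's or Leela Zero's actual formulas:

```python
import math

def lcb(winrate_sum: float, visits: int, z: float = 1.96) -> float:
    """Lower confidence bound on a move's mean winrate.

    Treats per-visit outcomes as i.i.d. values in [0, 1] and uses a
    crude normal-approximation error bar; real engines use more
    careful estimates.
    """
    if visits < 2:
        return -1.0  # too few samples to trust at all
    mean = winrate_sum / visits
    stderr = 0.5 / math.sqrt(visits)  # worst-case std dev for [0, 1] outcomes
    return mean - z * stderr

def pick_move(children):
    """children: list of (move, winrate_sum, visits) tuples."""
    return max(children, key=lambda c: lcb(c[1], c[2]))[0]
```

With this rule, a move averaging 0.60 over 10 visits loses to one averaging 0.55 over 100 visits, because the former's confidence interval is much wider.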
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
Totally agree. I don't even know what benefit they'd get at this point from keeping some parts locked up.
Anyway if you want something runnable Leela has a nice reimplementation: https://github.com/leela-zero/leela-zero
-
Please help me settle an argument with my friend about KataGo
See https://github.com/leela-zero/leela-zero/issues/2445 for an example with Leela Zero failing to see an atari, even *with* tons of search. This is a similar issue - neural nets have a hard time perceiving things that depend sensitively on large areas when unusual shapes are involved.
-
Go-playing trick defeats world-class Go AI—but loses to human amateurs
(https://github.com/leela-zero/leela-zero/issues/2273)
-
The blue recommended move was there for most of the game, screaming at me for not playing it. This doesn't look that big, though. Why would it be significant?
I think it's Leela https://github.com/leela-zero/leela-zero
- [Recommendation thread] Tired of keyboard politics; let's each recommend a site we often visit after getting past the firewall
-
[D] How OpenAI Sold its Soul for $1 Billion: The company behind GPT-3 and Codex isn’t as open as it claims.
There is Leela Zero
-
Lizzie Suggests Moves Off Board in 9x9 Game
Yeah, it looks like I need to recompile Leela Zero with a 9x9 board size? https://github.com/leela-zero/leela-zero/pull/928 (and https://github.com/leela-zero/leela-zero/issues/2613)
KataGo
-
After AI beat them, professional Go players got better and more creative
> KataGo was trained with more knowledge of the game (feature engineering and loss engineering), so it trained faster.
Not really important to your point, but it's not just that it uses more game knowledge. Mostly it's that a small but dedicated community (especially lightvector) worked hard to build on what AlphaGo and Leela Zero did. Lightvector is a genius and put a lot of effort into KataGo; it wasn't simply a matter of adding some game knowledge and stopping there. https://github.com/lightvector/KataGo?tab=readme-ov-file#tra... has a bunch of info if you're interested.
-
Monte-Carlo Graph Search from First Principles
I immediately recognised the author as the genius behind KataGo: https://github.com/lightvector/KataGo
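The linked article builds up Monte-Carlo tree and graph search from scratch. As background, here is a minimal sketch of the AlphaZero-style PUCT selection rule that this search family is based on; the names, tuple layout, and the `c_puct` constant are illustrative assumptions, not code from KataGo or the article:

```python
import math

def puct_score(total_value: float, visits: int,
               parent_visits: int, prior: float,
               c_puct: float = 1.5) -> float:
    """PUCT score: exploitation term Q plus an exploration bonus U
    weighted by the policy network's prior for the move."""
    q = total_value / visits if visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

def select_child(children, parent_visits: int):
    """children: list of (move, total_value, visits, prior) tuples."""
    return max(children,
               key=lambda c: puct_score(c[1], c[2], parent_visits, c[3]))[0]
```

An unvisited move with a high prior can outscore a well-explored one (its U term is large), which is exactly the exploration behavior the article's graph-search variant has to preserve when transpositions merge nodes.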
- Request for help getting two specific outputs from the Katago AI engine
-
KataGo should be partially resistant to cyclic groups now
(also, if you want to donate GPU time, https://katagotraining.org/ would be happy to have more people contributing to training as well!)
-
Man beats machine at Go in human victory over AI
> Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
My take: what Kellin Pelrine really exploited is that the AI can't learn and adapt. Even GPT can't learn or adapt to anything beyond its context window. It took a computer to find and teach him the winning strategy, and it probably took a lot longer than AlphaGo did to train. But once he learned, he had the advantage; meanwhile AlphaGo never adapted and learned to counter the strategy itself, because it can't.
One thing to note is that he beat KataGo [1] and Leela Zero [2], but not AlphaGo or AlphaZero, because the AlphaGos aren't public. So it's possible he wouldn't actually beat the real AlphaZero with this strategy. But considering that the strategy he used should in theory work against any model with the AlphaGo/AlphaZero design (he beat Leela Zero, which uses the same model architecture), and that Leela Chess and Stockfish are apparently better than AlphaZero now, I think he would still win.
[1] https://github.com/lightvector/KataGo
[2] https://github.com/leela-zero/leela-zero
Experimentally, KataGo did also try some limited ways of using external data at the end of its June 2020 run, and has continued to do so into its most recent public distributed run, "kata1" at https://katagotraining.org/. External data is not necessary for reaching top levels of play, but it still appears to provide some mild benefits against some opponents, and noticeable benefits when KataGo is used as an analysis tool, for a variety of situations that don't occur in self-play but do occur in human games and games that users wish to analyze.
-
I wonder if these ChatGPT answers will ever get nuked
I've been using ChatGPT since launch and constantly seeking out examples of how others have been using it. A few years ago I started using KataGo with Sabaki to improve my go-playing abilities. I've known about token embeddings in neural networks before ChatGPT was a twinkle in OpenAI's eye. I was there, but I haven't seen everything you've seen, so please show me. If the truth is that ChatGPT has canned responses to some prompt or set of prompts, then I want to believe that it does. If I have misconceptions about anything, I want to break those misconceptions. As long as your beliefs and mine contradict one another, one of us has the opportunity to learn.
-
Human Go players beat top Go AIs using a "trick"
For some stuff besides LCB, see https://github.com/lightvector/KataGo/blob/master/docs/KataGoMethods.md for a summary of a few more recent other things KataGo added that hadn't been done in earlier bots.
-
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
I'd suggest KataGo, which is much stronger and more actively developed than Leela Zero https://github.com/lightvector/KataGo
- KataGo changes training framework from TensorFlow to PyTorch
What are some alternatives?
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
opensea-js - TypeScript SDK for the OpenSea marketplace
katrain - Improve your Baduk skills by training with KataGo!
mctx - Monte Carlo tree search in JAX
online-go.com - Source code for the Online-Go.com web interface
koneko - 🐈🌐 nyaa.si terminal BitTorrent tracker
lizzie - Lizzie - Leela Zero Interface
leela-zero - Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper.
nnue-pytorch - Stockfish NNUE (Chess evaluation) trainer in Pytorch
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
BadukMegapack - Installer for various AI Baduk softwares