Searchless_chess Alternatives
Similar projects and alternatives to searchless_chess
- searchless_chess VS chessmate
- searchless_chess VS chess-transformers
- searchless_chess VS claude-code-proxy
- searchless_chess VS KataGo
- searchless_chess VS lila
- searchless_chess VS Stockfish
- searchless_chess VS chat.md
- searchless_chess VS augment-swebench-agent
- searchless_chess VS chess_gpt_eval
- searchless_chess VS SWE-bench
searchless_chess discussion
searchless_chess reviews and mentions
OpenAI o3 and o4-mini – OpenAI
>These models cannot even make legal chess moves. That’s incredibly basic logic, and it shows how LLMs are still completely incapable of reasoning or understanding.
Yeah, they can. There's a link I shared to prove it, which you've conveniently ignored.
LLMs trained on chess games play chess just fine. They don't make silly mistakes, and they very rarely make illegal moves.
There's gpt-3.5-turbo-instruct, which I already shared, and it plays at around 1800 Elo. Then there's this grandmaster-level chess transformer: https://arxiv.org/abs/2402.04494
So are they capable of reasoning now, or would you like to shift the goalposts?
Grandmaster-Level Chess Without Search
This repository provides an implementation of our paper Grandmaster-Level Chess Without Search (https://arxiv.org/abs/2402.04494).
The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale attention-based architectures and datasets of unprecedented scale. This paper investigates the impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.
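The play-time procedure this implies is simple: evaluate every legal move with the trained action-value model and play the argmax, with no search tree at all. Below is a minimal sketch of that loop, assuming the python-chess library; `action_value` is a hypothetical stand-in for the trained transformer, not the repository's actual API, and its placeholder body just counts material so the example runs.

```python
# Sketch of a search-free move-selection loop: one model evaluation per
# legal move, then argmax. No lookahead, no tree, no heuristics beyond
# what the value model has learned.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}


def action_value(board: chess.Board, move: chess.Move) -> float:
    """Hypothetical stand-in for the trained model: in the paper this
    would be one transformer forward pass returning a win probability
    for `move`. Here, a material count keeps the sketch runnable."""
    board.push(move)
    mover = not board.turn  # side that just moved
    score = sum(
        value * (len(board.pieces(piece, mover)) - len(board.pieces(piece, not mover)))
        for piece, value in PIECE_VALUES.items()
    )
    board.pop()
    return score


def pick_move(board: chess.Board) -> chess.Move:
    """One value-model call per legal move; no explicit search."""
    moves = list(board.legal_moves)  # materialise before mutating the board
    return max(moves, key=lambda move: action_value(board, move))


board = chess.Board()
print(pick_move(board).uci())  # e.g. "g1h3" under the material placeholder
```

Swapping the placeholder for a real model call leaves the surrounding loop unchanged, which is the paper's point: all the chess knowledge lives in the learned value function, none in explicit lookahead.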
Stats
google-deepmind/searchless_chess is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of searchless_chess is Python.