chess VS q-learning-algorithms

Compare chess vs q-learning-algorithms and see how they differ.

q-learning-algorithms

This repository aims to provide implementations of Q-learning algorithms (DQN, Double-DQN, ...) using PyTorch. (by thomashirtz)
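For context, the deep variants the repository targets (DQN, Double-DQN) all build on the classic tabular Q-learning update rule. A minimal, self-contained sketch of that rule is below; the states, actions, and reward values are illustrative and not taken from the repository:

```python
# Tabular Q-learning update:
#   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# DQN replaces the table with a neural network; the update target is the same.

def q_learning_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Apply one Q-learning step to the table `q` (dict of action-value dicts)."""
    best_next = max(q[next_state].values())          # greedy bootstrap over next actions
    td_target = reward + gamma * best_next           # one-step temporal-difference target
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# Tiny two-state example (hypothetical environment)
q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
q_learning_update(q, "s0", "right", reward=1.0, next_state="s1")
print(q["s0"]["right"])  # 0.1: alpha * reward, since Q(s1, .) is still all zeros
```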
|             | chess             | q-learning-algorithms |
|-------------|-------------------|-----------------------|
| Mentions    | 2                 | 1                     |
| Stars       | 19                | 4                     |
| Growth      | -                 | -                     |
| Activity    | 0.0               | 0.0                   |
| Last commit | about 2 years ago | almost 3 years ago    |
| Language    | Python            | Python                |
| License     | MIT License       | -                     |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

chess

Posts with mentions or reviews of chess. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-02.

q-learning-algorithms

Posts with mentions or reviews of q-learning-algorithms. We have used some of these posts to build our list of alternatives and similar projects.
  • actor-critic algorithms
    1 project | /r/reinforcementlearning | 11 Apr 2021
    I have learned quite a few things about reinforcement learning in the last months, and I feel like I understand deep Q-learning algorithms much better (if you want, you can check my [repo](https://github.com/thomashirtz/q-learning-algorithms)). I would now like to shift my focus toward actor-critic algorithms. The only thing is, compared to Q-learning algorithms, the explanations in the papers are not as precise, and explanations on the internet diverge greatly (e.g. the original paper does not give A2C but only A3C for one learner).
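For context on the Q-learning variants the repository covers: Double-DQN differs from DQN only in how the bootstrap target picks the next action (the online network selects, the target network evaluates). A sketch with plain Python lists standing in for network outputs at the next state; the numbers are illustrative only:

```python
# DQN vs Double-DQN targets. q_online_next / q_target_next stand in for the
# two networks' Q-value outputs at the next state; values are made up.

def dqn_target(reward, q_target_next, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action.
    return reward + gamma * max(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double-DQN: the online network selects the action (argmax), while the
    # target network evaluates it, which reduces overestimation bias.
    a = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[a]

q_online_next = [1.0, 2.0]   # online net prefers action 1
q_target_next = [3.0, 0.5]   # target net disagrees on values
print(dqn_target(0.0, q_target_next))                        # 0.99 * 3.0 = 2.97
print(double_dqn_target(0.0, q_online_next, q_target_next))  # 0.99 * 0.5 = 0.495
```

The two targets diverge exactly when the networks disagree, which is where Double-DQN's decoupled selection pays off.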

What are some alternatives?

When comparing chess and q-learning-algorithms you can also consider the following projects:

Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros

bomberland - Bomberland: a multi-agent AI competition based on Bomberman. This repository contains both starter / hello world kits + the engine source code

python-chess-annotator - Reads chess games in PGN format and adds annotations using an engine

AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.

sapai - Super Auto Pets engine built with reinforcement learning training in mind

fragile - Framework for building algorithms based on FractalAI

muzero-general - MuZero

neural_network_chess - Free Book about Deep-Learning approaches for Chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)

chappie.ai - Generalized AI to perform a multitude of tasks written in python3

neptune-contrib - This library is the home of the LegacyLogger for PyTorch Lightning.

chess-book-study - A simple companion app for when you are reading chess pdfs.

chesscog - Determining chess game state from an image.