open_spiel VS RustyNEAT

Compare open_spiel and RustyNEAT to see how they differ.

open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. (by google-deepmind)

RustyNEAT

Rust implementation of NEAT algorithm (HyperNEAT + ES-HyperNEAT + NoveltySearch + CTRNN + L-systems) (by aleksander-mendoza)
                open_spiel          RustyNEAT
Mentions        44                  2
Stars           3,999               0
Growth          1.5%                -
Activity        9.5                 7.8
Latest Commit   5 days ago          over 2 years ago
Language        C++                 Rust
License         Apache License 2.0  -
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

open_spiel

Posts with mentions or reviews of open_spiel. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-26.

RustyNEAT

Posts with mentions or reviews of RustyNEAT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-05.
  • Any tutorial on how to create RL C++ environments?
    7 projects | /r/reinforcementlearning | 5 Oct 2021
    If you want to really speed up your environment by several orders of magnitude, you can implement it in CUDA/Vulkan/OpenCL. Here is an example of what I did in Vulkan: https://mobile.twitter.com/MendozaDrosik It allows me to simulate thousands of agents in parallel. It works wonders, especially if you want to use genetic algorithms. If you're interested, I might make Python bindings to my Minecraft environment. If you write in Rust (like I do), you can add Python bindings very easily with PyO3. This is what I did here: https://github.com/aleksander-mendoza/RustyNEAT/blob/main/rusty_neat_quick_guide.py (it's a GPU-accelerated implementation of the NEAT algorithm)
  • Would I be able to train basic deep RL models on the m1 MacBook Air?
    1 project | /r/reinforcementlearning | 26 Aug 2021
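
The PyO3 approach mentioned in the first post above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the function name `evaluate_fitness` and the toy fitness formula are assumptions for the example, not part of RustyNEAT's API); the PyO3 attributes that expose the function to Python are shown as comments so the file compiles with plain rustc, and in a real project they would be active with the `pyo3` crate and a `cdylib` build.

```rust
// Hypothetical sketch of exposing a Rust function to Python with PyO3.
// In a real crate you would add `pyo3` to Cargo.toml, set crate-type to
// "cdylib", and uncomment the attribute macros below.

// use pyo3::prelude::*;

// #[pyfunction]  // makes the function callable from Python
fn evaluate_fitness(genome: &[f64]) -> f64 {
    // Toy fitness: negative sum of squared weights (illustrative only).
    -genome.iter().map(|w| w * w).sum::<f64>()
}

// #[pymodule]  // registers the module so `import my_neat` works in Python
// fn my_neat(_py: Python, m: &PyModule) -> PyResult<()> {
//     m.add_function(wrap_pyfunction!(evaluate_fitness, m)?)?;
//     Ok(())
// }

fn main() {
    let fitness = evaluate_fitness(&[0.5, -0.5]);
    println!("{}", fitness);
}
```

After building with a tool like maturin, the function would be importable from Python as an ordinary module function, which is the workflow the quoted post describes.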

What are some alternatives?

When comparing open_spiel and RustyNEAT you can also consider the following projects:

muzero-general - MuZero

brax - Massively parallel rigidbody physics simulation on accelerator hardware.

PettingZoo - An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

tiny-differentiable-simulator - Tiny Differentiable Simulator is a header-only C++ and CUDA physics library for reinforcement learning and robotics with zero dependencies.

gym - A toolkit for developing and comparing reinforcement learning algorithms.

ReinforcementLearning.jl - A reinforcement learning package for Julia

rlcard - Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.

procgen - Procgen Benchmark: Procedurally-Generated Game-Like Gym-Environments

gym-battleship - Battleship environment for reinforcement learning tasks

Numba - NumPy aware dynamic Python compiler using LLVM

TexasHoldemSolverJava - A Java-implemented Texas Hold'em and Short Deck solver

tensortrade - An open source reinforcement learning framework for training, evaluating, and deploying robust trading agents.