open_spiel VS RustyNEAT

Compare open_spiel vs RustyNEAT and see how they differ.

open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. (by google-deepmind)

RustyNEAT

Rust implementation of NEAT algorithm (HyperNEAT + ES-HyperNEAT + NoveltySearch + CTRNN + L-systems) (by aleksander-mendoza)
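The RustyNEAT description above mentions CTRNNs (continuous-time recurrent neural networks) among its model types. As a rough illustration of what a CTRNN computes, here is a toy forward-Euler integration of the standard CTRNN dynamics, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i. This is a generic sketch; all names and parameter values are illustrative and are not RustyNEAT's actual API.

```rust
// Toy CTRNN update via forward-Euler integration.
// Generic sketch of the standard CTRNN equations, not RustyNEAT's implementation.

fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

struct Ctrnn {
    tau: Vec<f64>,    // per-neuron time constants
    theta: Vec<f64>,  // per-neuron biases
    w: Vec<Vec<f64>>, // w[j][i]: weight from neuron j to neuron i
    y: Vec<f64>,      // neuron states
}

impl Ctrnn {
    fn step(&mut self, input: &[f64], dt: f64) {
        let n = self.y.len();
        let mut dy = vec![0.0; n];
        for i in 0..n {
            let mut sum = 0.0;
            for j in 0..n {
                sum += self.w[j][i] * sigmoid(self.y[j] + self.theta[j]);
            }
            // leak term (-y_i) plus weighted recurrent input plus external input
            dy[i] = (-self.y[i] + sum + input[i]) / self.tau[i];
        }
        for i in 0..n {
            self.y[i] += dt * dy[i];
        }
    }
}

fn main() {
    // Two neurons with mutually inhibitory/excitatory coupling.
    let mut net = Ctrnn {
        tau: vec![1.0, 1.0],
        theta: vec![0.0, 0.0],
        w: vec![vec![0.0, 1.0], vec![-1.0, 0.0]],
        y: vec![0.0, 0.0],
    };
    for _ in 0..100 {
        net.step(&[0.5, 0.0], 0.05);
    }
    println!("{:?}", net.y);
    // The leak term keeps the states bounded.
    assert!(net.y.iter().all(|v| v.is_finite()));
}
```

In a NEAT context, the topology and the weight matrix `w` would be evolved rather than hand-written.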
              open_spiel           RustyNEAT
Mentions      44                   2
Stars         3,969                0
Growth        1.4%                 -
Activity      9.4                  7.8
Last commit   3 days ago           over 2 years ago
Language      C++                  Rust
License       Apache License 2.0   -
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

open_spiel

Posts with mentions or reviews of open_spiel. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-26.

RustyNEAT

Posts with mentions or reviews of RustyNEAT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-05.
  • Any tutorial on how to create RL C++ environments?
    7 projects | /r/reinforcementlearning | 5 Oct 2021
    If you want to speed up your environment by several orders of magnitude, you can implement it in CUDA/Vulkan/OpenCL. Here is an example of what I did in Vulkan: https://mobile.twitter.com/MendozaDrosik. It allows me to simulate thousands of agents in parallel, and it works wonders especially if you want to use genetic algorithms. If you're interested, I might make Python bindings to my Minecraft environment. If you write in Rust (like I do), you can add Python bindings very easily with PyO3. This is what I did here: https://github.com/aleksander-mendoza/RustyNEAT/blob/main/rusty_neat_quick_guide.py (it's a GPU-accelerated implementation of the NEAT algorithm).
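The core idea in the quoted post is to evaluate an entire population of agents in parallel so that a genetic algorithm can score many candidates per generation. As a minimal sketch of that idea, here is a pure-Rust version that batches fitness evaluation across CPU threads; the threads stand in for the GPU (Vulkan/CUDA/OpenCL) batching described in the post, and the environment, genome encoding, and function names are all illustrative assumptions, not code from RustyNEAT.

```rust
// Toy parallel population evaluation for a genetic algorithm.
// CPU threads stand in for the GPU batching described in the quoted post.

use std::thread;

// Hypothetical per-agent fitness: a stand-in "environment" whose
// reward peaks when the (scalar) genome equals 2.0.
fn evaluate(genome: f64) -> f64 {
    -(genome - 2.0).powi(2)
}

// Split the population into chunks and score each chunk on its own thread.
fn evaluate_population(genomes: &[f64]) -> Vec<f64> {
    let chunks: Vec<Vec<f64>> = genomes.chunks(64).map(|c| c.to_vec()).collect();
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|chunk| {
            thread::spawn(move || chunk.iter().map(|&g| evaluate(g)).collect::<Vec<f64>>())
        })
        .collect();
    // Joining in order preserves the original genome ordering.
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let genomes: Vec<f64> = (0..1000).map(|i| i as f64 * 0.01).collect();
    let fitness = evaluate_population(&genomes);
    let best = fitness.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    println!("evaluated {} agents, best fitness: {}", fitness.len(), best);
}
```

On a GPU, the same structure becomes one kernel launch over the whole population instead of a thread per chunk, which is where the orders-of-magnitude speedups mentioned above come from.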

What are some alternatives?

When comparing open_spiel and RustyNEAT you can also consider the following projects:

muzero-general - MuZero

PettingZoo - An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

gym - A toolkit for developing and comparing reinforcement learning algorithms.

rlcard - Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.

gym-battleship - Battleship environment for reinforcement learning tasks

TexasHoldemSolverJava - A Texas Hold'em and Short Deck solver implemented in Java

brax - Massively parallel rigid-body physics simulation on accelerator hardware.

tensortrade - An open source reinforcement learning framework for training, evaluating, and deploying robust trading agents.

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

tiny-differentiable-simulator - Tiny Differentiable Simulator is a header-only C++ and CUDA physics library for reinforcement learning and robotics with zero dependencies.

AirSim - Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research