RustyNEAT VS open_spiel

Compare RustyNEAT vs open_spiel and see what their differences are.

RustyNEAT

Rust implementation of NEAT algorithm (HyperNEAT + ES-HyperNEAT + NoveltySearch + CTRNN + L-systems) (by aleksander-mendoza)

open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. (by google-deepmind)
                 RustyNEAT           open_spiel
Mentions         2                   44
Stars            0                   4,004
Growth           -                   0.8%
Activity         7.8                 9.5
Latest Commit    over 2 years ago    1 day ago
Language         Rust                C++
License          -                   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

RustyNEAT

Posts with mentions or reviews of RustyNEAT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-05.
  • Any tutorial on how to create RL C++ environments?
    7 projects | /r/reinforcementlearning | 5 Oct 2021
    If you want to really speed up your environment by several orders of magnitude, you can implement it in CUDA/Vulkan/OpenCL. Here is an example of what I did in Vulkan: https://mobile.twitter.com/MendozaDrosik It allows me to simulate thousands of agents in parallel. It works wonders, especially if you want to use genetic algorithms. If you're interested, I might make Python bindings to my Minecraft environment. If you write in Rust (like I do), then you can add Python bindings very easily with PyO3 (see the sketch after this list). This is what I did here: https://github.com/aleksander-mendoza/RustyNEAT/blob/main/rusty_neat_quick_guide.py (it's a GPU-accelerated implementation of the NEAT algorithm)
  • Would I be able to train basic deep RL models on the m1 MacBook Air?
    1 project | /r/reinforcementlearning | 26 Aug 2021
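To illustrate the PyO3 approach mentioned above, here is a minimal sketch of exposing a Rust environment to Python. It assumes PyO3's 0.20-style module API; the `GridEnv` class, its toy step/reset logic, and the `rusty_env` module name are made up for illustration and are not part of RustyNEAT.

```rust
// Minimal PyO3 binding sketch (hypothetical environment, not RustyNEAT's API).
use pyo3::prelude::*;

#[pyclass]
struct GridEnv {
    state: Vec<f32>,
}

#[pymethods]
impl GridEnv {
    #[new]
    fn new(size: usize) -> Self {
        // Start with an all-zero observation vector of the requested size.
        GridEnv { state: vec![0.0; size] }
    }

    /// Apply an action and return (observation, reward, done).
    fn step(&mut self, action: usize) -> (Vec<f32>, f32, bool) {
        if action < self.state.len() {
            self.state[action] += 1.0;
        }
        // Toy reward: sum of the state vector.
        let reward: f32 = self.state.iter().sum();
        (self.state.clone(), reward, false)
    }

    /// Reset the environment and return the initial observation.
    fn reset(&mut self) -> Vec<f32> {
        for x in self.state.iter_mut() {
            *x = 0.0;
        }
        self.state.clone()
    }
}

/// Python module definition; the module name here is hypothetical.
#[pymodule]
fn rusty_env(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_class::<GridEnv>()?;
    Ok(())
}
```

After building with a tool such as maturin (e.g. `maturin develop`), the class would be callable from Python as `rusty_env.GridEnv(16)`, with `step()` and `reset()` behaving like a typical RL environment interface.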

open_spiel

Posts with mentions or reviews of open_spiel. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-26.

What are some alternatives?

When comparing RustyNEAT and open_spiel you can also consider the following projects:

brax - Massively parallel rigidbody physics simulation on accelerator hardware.

muzero-general - MuZero

tiny-differentiable-simulator - Tiny Differentiable Simulator is a header-only C++ and CUDA physics library for reinforcement learning and robotics with zero dependencies.

PettingZoo - An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

ReinforcementLearning.jl - A reinforcement learning package for Julia

gym - A toolkit for developing and comparing reinforcement learning algorithms.

procgen - Procgen Benchmark: Procedurally-Generated Game-Like Gym-Environments

rlcard - Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.

Numba - NumPy aware dynamic Python compiler using LLVM

gym-battleship - Battleship environment for reinforcement learning tasks

TexasHoldemSolverJava - A Java-implemented Texas Hold'em and Short Deck solver

tensortrade - An open source reinforcement learning framework for training, evaluating, and deploying robust trading agents.