minimalRL vs DeepRL-TensorFlow2

Compare minimalRL vs DeepRL-TensorFlow2 and see what their differences are.

             minimalRL           DeepRL-TensorFlow2
Mentions     5                   2
Stars        2,725               573
Growth       -                   -
Activity     1.6                 0.0
Last commit  about 1 year ago    almost 2 years ago
Language     Python              Python
License      MIT License         Apache License 2.0
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

minimalRL

Posts with mentions or reviews of minimalRL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-07-18.

DeepRL-TensorFlow2

Posts with mentions or reviews of DeepRL-TensorFlow2. We have used some of these posts to build our list of alternatives and similar projects.
  • PPO implementation in TensorFlow2
    1 project | /r/reinforcementlearning | 12 Sep 2021
    I've been searching for a clean, well-written implementation of PPO for continuous action spaces in TF2 that is understandable enough for me to apply my modifications, but the closest thing I have found is this code, which seems not to work properly even on a simple Gym CartPole env (issues discussed in the GitHub repo suggest the same problem), so I have some doubts :). I was wondering whether you could recommend an implementation that you trust :)
  • Question about using tf.stop_gradient in separate Actor-Critic networks for A2C implementation for TF2
    1 project | /r/reinforcementlearning | 24 Mar 2021
    I have been looking at this implementation of A2C. Here the author uses stop_gradient only on the critic network at L90 but not on the actor network at L61 for the continuous case. However, it is used in both the actor and critic networks for the discrete case. Can someone explain to me why?
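
For reference on the first post: the heart of PPO is the clipped surrogate objective from Schulman et al. (2017), which takes only a few lines in TensorFlow 2. Below is a minimal sketch for a continuous (diagonal Gaussian) policy; the function and variable names are illustrative and not taken from either repository.

    import math
    import tensorflow as tf

    CLIP_EPS = 0.2                       # standard PPO clipping range
    LOG_2PI = math.log(2.0 * math.pi)

    def gaussian_log_prob(actions, mean, log_std):
        # Log-density of a diagonal Gaussian policy, summed over action dims.
        std = tf.exp(log_std)
        logp = -0.5 * (tf.square((actions - mean) / std) + 2.0 * log_std + LOG_2PI)
        return tf.reduce_sum(logp, axis=-1)

    def ppo_actor_loss(mean, log_std, actions, old_log_probs, advantages):
        # Clipped surrogate objective: penalize the policy for moving the
        # probability ratio outside [1 - eps, 1 + eps].
        log_probs = gaussian_log_prob(actions, mean, log_std)
        ratio = tf.exp(log_probs - old_log_probs)        # pi_new / pi_old
        clipped = tf.clip_by_value(ratio, 1.0 - CLIP_EPS, 1.0 + CLIP_EPS)
        # Elementwise minimum of the two surrogates; maximize it, so minimize
        # its negation.
        return -tf.reduce_mean(tf.minimum(ratio * advantages, clipped * advantages))

Here old_log_probs are the log-probabilities recorded at rollout time, and advantages are precomputed (e.g. with GAE) and treated as constants during the update.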
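
On the second post: the usual reason for tf.stop_gradient there is that the advantage is computed from the critic's value estimate, but the actor loss should treat it as a constant so that actor updates do not backpropagate into the critic. A minimal sketch of that pattern for the discrete case, assuming separate actor and critic networks (names are illustrative, not taken from the repo under discussion):

    import tensorflow as tf

    def a2c_losses(logits, values, actions, returns):
        # logits:  actor output, shape [batch, n_actions]
        # values:  critic output, shape [batch]
        # actions: taken actions, shape [batch], integer dtype
        # returns: bootstrapped n-step returns, shape [batch]

        # The advantage depends on the critic, but the actor update must
        # treat it as a constant -- hence tf.stop_gradient.
        advantages = tf.stop_gradient(returns - values)

        neg_log_probs = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=actions, logits=logits)
        actor_loss = tf.reduce_mean(neg_log_probs * advantages)

        # No stop_gradient on the critic loss: this is exactly where the
        # value-function gradients are supposed to flow.
        critic_loss = tf.reduce_mean(tf.square(returns - values))
        return actor_loss, critic_loss

With truly separate networks and separate GradientTapes, the critic would not receive actor gradients anyway, so the stop_gradient is partly belt-and-braces; it becomes essential when the actor and critic share layers or when the advantage tensor is reused across losses.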

What are some alternatives?

When comparing minimalRL and DeepRL-TensorFlow2, you can also consider the following projects:

ElegantRL - Massively Parallel Deep Reinforcement Learning. 🔥

soft-actor-critic - Re-implementation of Soft-Actor-Critic (SAC) in TensorFlow 2.0

Pytorch-PCGrad - Pytorch reimplementation for "Gradient Surgery for Multi-Task Learning"

tensorforce - Tensorforce: a TensorFlow library for applied reinforcement learning

rlpyt - Reinforcement Learning in PyTorch

TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning. 🐙

pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022

ydata-synthetic - Synthetic data generators for tabular and time-series data

deep-RL-trading - playing idealized trading games with deep reinforcement learning

machin - Reinforcement learning library (framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...

ultimate-volleyball - 3D RL Volleyball environment built on Unity ML-Agents

tf2multiagentrl - Clean implementation of Multi-Agent Reinforcement Learning methods (MADDPG, MATD3, MASAC, MAD4PG) in TensorFlow 2.x