mtrl
Multi Task RL Baselines (by facebookresearch)
cleanrl
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG) (by vwxyzjn)
| | mtrl | cleanrl |
|---|---|---|
| Mentions | 1 | 41 |
| Stars | 211 | 4,493 |
| Growth | - | - |
| Activity | 0.0 | 6.3 |
| Last commit | over 2 years ago | 9 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mtrl
Posts with mentions or reviews of mtrl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-30.
- Best PyTorch RL library for doing research
MTRL for multi-task RL
cleanrl
Posts with mentions or reviews of cleanrl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-24.
- [P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bugfixes, documentation updates and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
- PPO agent for "2048": help requested
Here's where the problem starts: after implementing a custom environment that follows the typical gymnasium interface and using a slightly adjusted PPO implementation from CleanRL, I cannot get the agent to learn anything at all, even though this specific implementation seems to work just fine on basic gymnasium examples. I am hoping the RL community here can help me with some useful pointers.
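For readers hitting the same wall: CleanRL's PPO only assumes the standard gymnasium API, so a usual first check is that the custom environment exposes `observation_space`, `action_space`, `reset`, and `step` with the expected signatures. The sketch below is a minimal, illustrative 2048-style stub; the class name, spaces, and the no-op transition are placeholders, not the poster's code.

```python
# Illustrative sketch of the gymnasium interface a custom 2048-style env must expose.
# The class name, board encoding, and reward logic are placeholders, not the poster's code.
import gymnasium as gym
import numpy as np


class Env2048(gym.Env):
    def __init__(self):
        # 4x4 board flattened into a Box observation; 4 discrete moves (up/down/left/right).
        self.observation_space = gym.spaces.Box(low=0, high=2**16, shape=(16,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(4)
        self.board = np.zeros(16, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.board[:] = 0
        return self.board.copy(), {}

    def step(self, action):
        # Placeholder transition: a real implementation would slide and merge tiles here
        # and compute the reward from the merged tile values.
        reward = 0.0
        terminated = False
        truncated = False
        return self.board.copy(), reward, terminated, truncated, {}
```

A stub like this can be sanity-checked with gymnasium's environment checker (`gymnasium.utils.env_checker.check_env`) before any PPO training is attempted.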
- [P] 10x faster reinforcement learning hyperparameter optimization than SOTA - now with distributed training!
- PPO ignores high rewards in deterministic system
Try out a standard implementation with some standard parameters from here: https://github.com/vwxyzjn/cleanrl/tree/master/cleanrl
- SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
I am trying to run cleanrl on the `Pendulum-v1` environment. I did that by going here and changing the default `env-id` to `parser.add_argument("--env-id", type=str, default="Pendulum-v1", ...)`.
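For context, CleanRL's single-file scripts expose the environment through an argparse flag, so the quoted change boils down to editing one default value. The snippet below is a minimal stand-in for that parser showing only the one flag; it is not the actual script, and the help text is illustrative.

```python
# Minimal stand-in for a CleanRL-style argument parser, showing the quoted change:
# the --env-id default switched to Pendulum-v1. All other flags are omitted.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--env-id", type=str, default="Pendulum-v1",
                    help="the id of the gymnasium environment")
args = parser.parse_args()
print(args.env_id)  # argparse converts --env-id into the attribute args.env_id
```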
- Cartpole and mountain car
- cleanrl gym issues
git clone https://github.com/vwxyzjn/cleanrl.git && cd cleanrl && poetry install
- Why is my Soft Actor Critic Algorithm not learning?
Can someone please help me debug my implementation of SAC. Please let me know if you have any questions. I tried comparing my work with CleanRL and caught a couple of errors. However, my implementation does diverge a lot from theirs as I wanted to test my understanding.
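When comparing a hand-rolled SAC against a reference like CleanRL, the critic target is a frequent source of silent divergence. The function below is a hedged sketch of the usual entropy-regularized, clipped double-Q target in PyTorch; the argument names and the `actor.get_action` interface are illustrative assumptions, not code from either implementation.

```python
import torch


def sac_td_target(rewards, dones, next_obs, actor, qf1_target, qf2_target, alpha, gamma=0.99):
    """Entropy-regularized clipped double-Q target, the usual SAC formulation.

    All arguments are illustrative assumptions: `actor.get_action` is expected to
    return (action, log_prob); the target critics map (obs, action) -> Q-value.
    """
    with torch.no_grad():
        # Next actions come from the current policy, not the replay buffer.
        next_actions, next_log_probs = actor.get_action(next_obs)
        # Clipped double-Q: take the minimum of the two target critics.
        q_next = torch.min(
            qf1_target(next_obs, next_actions),
            qf2_target(next_obs, next_actions),
        )
        # Subtract the entropy term and mask terminal transitions with (1 - dones).
        return rewards + gamma * (1.0 - dones) * (q_next - alpha * next_log_probs)
```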
- Model-based hierarchical reinforcement learning
Shameless self-plug: as far as implementation is concerned, I am working on a (hopefully) easier-to-understand Dreamer architecture under the CleanRL library, toward also re-implementing Director, Dreamer-v3, and a JAX variant for faster training.
- [P] Robust Policy Optimization is now in CleanRL 🔥!
Happy to share that CleanRL now has a new algorithm called Robust Policy Optimization — a 5-line code change to PPO that gets better performance in 57 out of 61 continuous-action envs 🚀 (e.g., dm_control)
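For the curious: as the Robust Policy Optimization paper describes it, the change amounts to perturbing the mean of the Gaussian policy with uniform noise when evaluating actions during the update. The sketch below shows that idea in isolation; `rpo_alpha`, the function signature, and the tensor names are illustrative, not CleanRL's exact diff.

```python
# Sketch of the RPO idea on top of a Gaussian PPO policy: during the update,
# perturb the action mean with uniform noise before evaluating log-probabilities.
# rpo_alpha and the function signature are illustrative, not CleanRL's exact diff.
import torch
from torch.distributions import Normal, Uniform


def evaluate_actions(action_mean, action_std, actions, rpo_alpha=0.5):
    # Uniform perturbation of the mean keeps the policy from collapsing to a
    # near-deterministic distribution during training.
    z = Uniform(-rpo_alpha, rpo_alpha).sample(action_mean.shape).to(action_mean.device)
    dist = Normal(action_mean + z, action_std)
    return dist.log_prob(actions).sum(-1), dist.entropy().sum(-1)
```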
What are some alternatives?
When comparing mtrl and cleanrl, you can also consider the following projects:
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Deep-Reinforcement-Learning-Algorithms-with-PyTorch - PyTorch implementations of deep reinforcement learning algorithms and environments
tianshou - An elegant PyTorch deep reinforcement learning library.
d3rlpy - An offline deep reinforcement learning library
machin - Reinforcement learning library (framework) designed for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
reinforcement-learning-discord-wiki - The RL discord wiki
mbrl-lib - Library for Model Based RL
rlpyt - Reinforcement Learning in PyTorch