cleanrl vs spinningup
| | cleanrl | spinningup |
|---|---|---|
| Mentions | 41 | 8 |
| Stars | 4,459 | 9,653 |
| Growth | - | 1.2% |
| Activity | 6.3 | 0.0 |
| Last commit | 6 days ago | 14 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cleanrl
- [P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bugfixes, documentation updates and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
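For context, CleanRL-style training scripts typically wire TensorBoard and WandB together through `sync_tensorboard`; a minimal sketch of that pattern (project and run names below are placeholders, and a configured wandb account is assumed):

```python
# Minimal sketch of TensorBoard + WandB tracking in the CleanRL style.
# Project and run names are hypothetical, not from the release.
import wandb
from torch.utils.tensorboard import SummaryWriter

run_name = "pistonball_v6__ppo__1"  # hypothetical run name
wandb.init(project="cleanrl-pettingzoo-demo", name=run_name,
           sync_tensorboard=True)   # mirrors TensorBoard scalars to WandB
writer = SummaryWriter(f"runs/{run_name}")

for global_step in range(10):
    writer.add_scalar("charts/episodic_return", float(global_step), global_step)
writer.close()
wandb.finish()
```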
- PPO agent for "2048": help requested
Here's where the problem starts: after implementing a custom environment that follows the typical gymnasium interface and using a slightly adjusted PPO implementation from CleanRL, I cannot get the agent to learn anything at all, even though this specific implementation seems to work fine on basic gymnasium examples. I am hoping the RL community here can help me with some useful pointers.
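For reference, the "typical gymnasium interface" means `reset` returning `(obs, info)` and `step` returning a 5-tuple; a minimal 2048-style skeleton (board logic stubbed out, observation shape and encoding are my assumptions):

```python
# Minimal sketch of a custom 2048-style gymnasium environment.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class Env2048(gym.Env):
    def __init__(self):
        # 4x4 board flattened to 16 values; normalization is an assumption
        self.observation_space = spaces.Box(0, 1, shape=(16,), dtype=np.float32)
        self.action_space = spaces.Discrete(4)  # up, down, left, right
        self.board = np.zeros(16, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.board = np.zeros(16, dtype=np.float32)
        return self.board.copy(), {}

    def step(self, action):
        reward = 0.0        # merge score would go here
        terminated = False  # True when no moves remain
        truncated = False
        return self.board.copy(), reward, terminated, truncated, {}
```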
- [P] 10x faster reinforcement learning hyperparameter optimization than SOTA - now with distributed training!
- PPO ignores high rewards in deterministic system
Try out a standard implementation with some standard parameters from here: https://github.com/vwxyzjn/cleanrl/tree/master/cleanrl
- SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
I am trying to run cleanrl on the `Pendulum-v1` environment. I did that by going here and changing the default `env-id` to `Pendulum-v1`: `parser.add_argument("--env-id", type=str, default="Pendulum-v1", ...)`
- Cartpole and mountain car
- cleanrl gym issues
git clone https://github.com/vwxyzjn/cleanrl.git && cd cleanrl
poetry install
- Why is my Soft Actor Critic Algorithm not learning?
Can someone please help me debug my implementation of SAC? Please let me know if you have any questions. I tried comparing my work with CleanRL and caught a couple of errors; however, my implementation does diverge a lot from theirs, as I wanted to test my understanding.
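When comparing a SAC implementation against a reference, the critic target is a common divergence point; a sketch of the soft Bellman target SAC uses (function and argument names here are placeholders, not CleanRL's):

```python
# Sketch of the SAC soft Q target: next actions come from the current
# policy, and the entropy term is subtracted from the min of the twin
# target critics before bootstrapping.
import torch

def soft_q_target(rew, done, next_obs, policy, q1_targ, q2_targ,
                  gamma=0.99, alpha=0.2):
    with torch.no_grad():
        next_act, next_logp = policy(next_obs)  # assumed: sample + log prob
        q_next = torch.min(q1_targ(next_obs, next_act),
                           q2_targ(next_obs, next_act))
        return rew + gamma * (1.0 - done) * (q_next - alpha * next_logp)
```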
- Model-based hierarchical reinforcement learning
Shameless self-plug: as far as implementation is concerned, I am working on a (hopefully) easier-to-understand Dreamer architecture under the CleanRL library, toward also re-implementing Director, Dreamer-v3, and a JAX variant for faster training.
- [P] Robust Policy Optimization is now in CleanRL 🔥!
Happy to share that CleanRL now has a new algorithm called Robust Policy Optimization: a 5-line code change to PPO that gets better performance in 57 out of 61 continuous-action envs 🚀 (e.g., dm_control)
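The change RPO makes is small enough to show in a sketch: during the update, the Gaussian mean is perturbed with uniform noise before log probs are evaluated (the hyperparameter name follows the paper; the exact CleanRL code may differ):

```python
# Sketch of the RPO idea: add Uniform(-alpha, alpha) noise to the
# policy mean when evaluating stored actions during the PPO update.
import torch
from torch.distributions import Normal

def rpo_log_prob(action_mean, action_std, action, rpo_alpha=0.5):
    noise = torch.empty_like(action_mean).uniform_(-rpo_alpha, rpo_alpha)
    dist = Normal(action_mean + noise, action_std)
    return dist.log_prob(action).sum(-1)
```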
spinningup
- REINFORCE algorithm implementation question
I am trying to follow a vanilla implementation of the REINFORCE algorithm found here: https://github.com/openai/spinningup/blob/master/spinup/examples/pytorch/pg_math/1_simple_pg.py
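The heart of that file is a single loss, paraphrased here as a sketch (the actual script builds `weights` by repeating each episode's total return across its timesteps):

```python
# Sketch of the simple policy gradient loss: maximize log-prob of
# taken actions, weighted by the return of the episode they came from.
import torch
from torch.distributions import Categorical

def pg_loss(logits, actions, weights):
    # logits: (N, n_acts), actions: (N,), weights: (N,) episode returns
    logp = Categorical(logits=logits).log_prob(actions)
    return -(logp * weights).mean()
```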
- Why does OpenAI's implementation compute the log prob of an action before completely computing it?
I am looking at OpenAI's implementation of SAC over here. Here is their code to compute the action and its log prob:
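(The quoted snippet is elided in this excerpt.) The question refers to the squashed-Gaussian trick: the Normal log prob is taken on the pre-tanh sample and then corrected for the squashing via change of variables, rather than evaluating a density on the final action directly. A sketch of the pattern, using the numerically stable softplus form that Spinning Up's SAC uses:

```python
# Sketch of a squashed Gaussian policy sample: log prob is computed on
# the pre-tanh action, then adjusted by the log-det Jacobian of tanh.
import math
import torch
import torch.nn.functional as F
from torch.distributions import Normal

def squashed_gaussian_sample(mu, std):
    dist = Normal(mu, std)
    pre_tanh = dist.rsample()               # reparameterized sample
    logp = dist.log_prob(pre_tanh).sum(-1)  # log prob of pre-tanh action
    # correction: log(1 - tanh(u)^2) in a stable form
    logp -= (2 * (math.log(2) - pre_tanh - F.softplus(-2 * pre_tanh))).sum(-1)
    return torch.tanh(pre_tanh), logp
```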
- Onboarding at openai
I have so many questions! You guys fascinate me.
- Rationale for updating Value Function multiple times with same observations in spinningup's VPG-GAE implementation
In OpenAI's Spinning Up VPG-GAE implementation, the authors update the value function V(s_t) multiple times at every epoch using the same batch of observations. Copying their code (line 237 onwards in the link above):
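(The quoted code is elided in this excerpt.) The pattern in question is simply a fixed number of gradient steps on one batch; a sketch (`train_v_iters` is Spinning Up's name for the loop count, other names are placeholders):

```python
# Sketch of the VPG value-function update: several regression steps
# on the same batch of observations and returns-to-go per epoch.
import torch

def update_value_function(v_net, v_optimizer, obs, returns, train_v_iters=80):
    for _ in range(train_v_iters):
        v_optimizer.zero_grad()
        loss_v = ((v_net(obs) - returns) ** 2).mean()  # MSE to returns
        loss_v.backward()
        v_optimizer.step()
```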
- Where to start? General advice for a hobby project
https://spinningup.openai.com/ is a good resource to start understanding the algorithms hands-on by looking at their implementations.
- Proximal Policy Optimization Network Implementation
PPO can be used to produce deterministic outputs: as you mentioned, the two values of the actor network can be those two deterministic values, with no need to model a mean and variance. But if you want a normal distribution, check the section of Spinning Up's intro to RL where they explain policies, and then the code where the implementation is; it is in PyTorch, but they have a TensorFlow version as well. You can see there is something called a Gaussian actor, in how they compute the log_prob and then get the mean and variance to generate a normal distribution.
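A sketch of the Gaussian actor idea described above (layer sizes and names are my own; the Spinning Up version is structured similarly, with the network producing the mean and a learned state-independent log-std providing the variance):

```python
# Sketch of a diagonal Gaussian actor: mean from an MLP, a learned
# log-std parameter, and log probs summed across action dimensions.
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianActor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mu_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(-0.5 * torch.ones(act_dim))

    def forward(self, obs, act=None):
        dist = Normal(self.mu_net(obs), torch.exp(self.log_std))
        logp = dist.log_prob(act).sum(-1) if act is not None else None
        return dist, logp
```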
- Is there a particular reason why TD3 is outperforming SAC by a ton on a velocity and locomotion-based attitude control?
Take a look at the Spinning Up repo.
What are some alternatives?
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
tianshou - An elegant PyTorch deep reinforcement learning library.
d3rlpy - An offline deep reinforcement learning library
reinforcement-learning-discord-wiki - The RL discord wiki
mbrl-lib - Library for Model Based RL
machin - Reinforcement learning library (framework) designed for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
sample-factory - High throughput synchronous and asynchronous reinforcement learning
wandb - 🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
Deep-Reinforcement-Learning-Algorithms-with-PyTorch - PyTorch implementations of deep reinforcement learning algorithms and environments
deep_rl_zoo - A collection of Deep Reinforcement Learning algorithms implemented with PyTorch to solve Atari games and classic control tasks like CartPole, LunarLander, and MountainCar.
dm_env - A Python interface for reinforcement learning environments