I am trying to run cleanrl on the `Pendulum-v1` environment. I did that by changing the default `env-id` in the script's argument parser:

```python
parser.add_argument("--env-id", type=str, default="Pendulum-v1",
    help="the id of the environment")
```
Running the script then raised an error, which I traced to the `ReplayBuffer` that cleanrl imports from `SB3` (stable-baselines3). This is the problem function -
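For context on where such an error can originate: SB3's `ReplayBuffer` stores transitions in preallocated arrays and samples uniform minibatches for the update step. The following is a minimal NumPy sketch of that add/sample cycle, written for illustration only (it is not SB3's actual implementation, and the class and parameter names here are made up). Note that `Pendulum-v1` has a 3-dimensional observation and a 1-dimensional continuous action, so shape mismatches between the buffer and the environment are a common failure point when switching environments.

```python
import numpy as np

class MiniReplayBuffer:
    """Illustrative replay buffer: circular storage + uniform sampling.
    This is a sketch, NOT stable-baselines3's ReplayBuffer."""

    def __init__(self, capacity, obs_dim, act_dim):
        self.capacity = capacity
        self.pos = 0          # next write position
        self.full = False     # whether the buffer has wrapped around
        self.obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.next_obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.actions = np.zeros((capacity, act_dim), dtype=np.float32)
        self.rewards = np.zeros((capacity,), dtype=np.float32)
        self.dones = np.zeros((capacity,), dtype=np.float32)

    def add(self, obs, next_obs, action, reward, done):
        # Overwrite the oldest entry once the buffer is full (circular).
        self.obs[self.pos] = obs
        self.next_obs[self.pos] = next_obs
        self.actions[self.pos] = action
        self.rewards[self.pos] = reward
        self.dones[self.pos] = float(done)
        self.pos = (self.pos + 1) % self.capacity
        if self.pos == 0:
            self.full = True

    def sample(self, batch_size):
        # Sample only from the portion that has been written so far.
        upper = self.capacity if self.full else self.pos
        idx = np.random.randint(0, upper, size=batch_size)
        return (self.obs[idx], self.actions[idx], self.next_obs[idx],
                self.rewards[idx], self.dones[idx])

# Pendulum-v1: obs_dim=3, act_dim=1 (continuous torque)
buf = MiniReplayBuffer(capacity=1000, obs_dim=3, act_dim=1)
for _ in range(10):
    buf.add(np.zeros(3), np.zeros(3), np.zeros(1), 0.0, False)
obs_b, act_b, next_b, rew_b, done_b = buf.sample(4)
print(obs_b.shape, act_b.shape)  # (4, 3) (4, 1)
```

If the error involves array shapes, comparing the buffer's `obs_dim`/`act_dim` against the environment's observation and action spaces is a good first check.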