rl-baselines3-zoo
stable-baselines
| | rl-baselines3-zoo | stable-baselines |
|---|---|---|
| Mentions | 11 | 10 |
| Stars | 1,777 | 4,000 |
| Growth | 5.0% | - |
| Activity | 6.3 | 0.0 |
| Latest commit | 23 days ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rl-baselines3-zoo
-
Can't solve MountainCar-v0 with A2C algorithm (stable-baselines3)
I'm trying to solve the MountainCar-v0 environment from Gymnasium with the A2C algorithm, and the agent doesn't find a solution. I checked this, so I added `from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike`. I also checked rl-baselines3-zoo for the hyperparameter tuning. So my code is:
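A minimal sketch of the fix the SB3 docs suggest for this, passing RMSpropTFLike through policy_kwargs so A2C's optimizer matches the TF1-style RMSProp the tuned hyperparameters were found with (the timestep budget here is illustrative):

```python
from stable_baselines3 import A2C
from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike

# TF1-style RMSProp (epsilon inside the square root), for parity with
# hyperparameters tuned on the original TensorFlow Stable Baselines
model = A2C(
    "MlpPolicy",
    "MountainCar-v0",
    policy_kwargs=dict(optimizer_class=RMSpropTFLike, optimizer_kwargs=dict(eps=1e-5)),
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```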
-
Stable-Baselines3 v2.0: Gymnasium Support
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
-
Tips and Tricks for RL from Experimental Data using Stable Baselines3 Zoo
I'm still new to the domain but wanted to share some experimental data I've gathered from a massive amount of experimentation. I don't have a strong understanding of the theory, as I'm more of a software engineer than a data scientist, but perhaps this will help other implementers. These notes are based on Stable Baselines 3 and RL Baselines3 Zoo using PPO+LSTM (they should apply to all the algorithms for the most part).
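For reference, a minimal sketch of the PPO+LSTM setup those notes refer to: in the zoo it is the ppo_lstm algorithm, backed by RecurrentPPO from sb3-contrib (the environment and step count are illustrative):

```python
from sb3_contrib import RecurrentPPO

# PPO with an LSTM policy ("ppo_lstm" in the zoo configs)
model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=100_000)
```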
-
Simple continuous environment with a spaceship, yet challenging for RL algorithms (like SAC, TD3)
Try hyperparameter search; it's implemented in https://github.com/DLR-RM/rl-baselines3-zoo for stable-baselines3. Hyperparameters make a huge difference in RL, much more than in supervised learning.
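The zoo's search is built on Optuna; a minimal standalone sketch of the same idea, tuning SAC's learning rate (the objective, environment, and trial budget are all illustrative, not the zoo's actual setup):

```python
import optuna
from stable_baselines3 import SAC
from stable_baselines3.common.evaluation import evaluate_policy

def objective(trial: optuna.Trial) -> float:
    # Sample a learning rate on a log scale and train a short run
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    model = SAC("MlpPolicy", "Pendulum-v1", learning_rate=lr, verbose=0)
    model.learn(total_timesteps=20_000)
    mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
    return mean_reward

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```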
-
Easily load and upload Stable-baselines3 models from the Hugging Face Hub 🤗
Integrating RL-baselines3-zoo
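A minimal sketch of that Hub integration via the huggingface_sb3 helper package (the repo id and filename follow the sb3 org's naming convention but are illustrative):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download a checkpoint from the Hugging Face Hub, then load it as a
# regular SB3 model
checkpoint = load_from_hub(
    repo_id="sb3/ppo-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
)
model = PPO.load(checkpoint)
```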
-
Help comparing Double DQN against another paper's results
Hello, I've been running some tests of Double DQN with Stable Baselines 3 Zoo, and for comparison I'm using the graphs provided by the Noisy Networks for Exploration paper.
-
DDPG not solving MountainCarContinuous
- you can find tuned hyperparameters for DDPG, SAC, and PPO in https://github.com/DLR-RM/rl-baselines3-zoo
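The usual reason DDPG stalls on MountainCarContinuous is insufficient exploration; a minimal sketch adding Ornstein-Uhlenbeck action noise (the sigma and step count are illustrative; the zoo's hyperparams files hold the tuned values):

```python
import numpy as np
from stable_baselines3 import DDPG
from stable_baselines3.common.noise import OrnsteinUhlenbeckActionNoise

# MountainCarContinuous has one action dimension; without strong,
# correlated exploration noise the car rarely reaches the goal by chance
action_noise = OrnsteinUhlenbeckActionNoise(mean=np.zeros(1), sigma=0.5 * np.ones(1))
model = DDPG("MlpPolicy", "MountainCarContinuous-v0", action_noise=action_noise, verbose=1)
model.learn(total_timesteps=300_000)
```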
-
Hyperparameter tuning examples
For a more complete implementation: https://github.com/DLR-RM/rl-baselines3-zoo
-
How do I convert zoo / gym trained models to TensorFlow Lite or PyTorch TorchScript?
https://github.com/DLR-RM/rl-baselines3-zoo (PyTorch based, using https://github.com/DLR-RM/stable-baselines3)
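A minimal sketch of the TorchScript half of that question: SB3 policies are plain torch.nn.Module objects, so a trained policy can be traced once the deterministic flag is fixed inside a wrapper (the checkpoint path is hypothetical, and tracing can fail for policies with data-dependent control flow):

```python
import torch
from stable_baselines3 import PPO

model = PPO.load("ppo_cartpole.zip", device="cpu")  # hypothetical checkpoint

class DeterministicPolicy(torch.nn.Module):
    """Fixes deterministic=True so torch.jit.trace only sees a tensor input."""

    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, obs):
        actions, _values, _log_prob = self.policy(obs, deterministic=True)
        return actions

dummy_obs = torch.zeros(1, *model.observation_space.shape)
traced = torch.jit.trace(DeterministicPolicy(model.policy), dummy_obs)
traced.save("ppo_cartpole_policy.pt")
```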
-
[P] Stable-Baselines3 v1.0 - Reliable implementations of RL algorithms
We also release 100+ trained models in our experimental framework, the rl zoo: https://github.com/DLR-RM/rl-baselines3-zoo
stable-baselines
-
Distributed implementation tips
As underlined by gold-panda, you can give multiprocessing a try. I once implemented a version based on what is done in stable-baselines v1 (https://github.com/hill-a/stable-baselines/blob/master/stable_baselines/common/vec_env/subproc_vec_env.py)
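A minimal sketch of that pattern with the SB3 successor API, where SubprocVecEnv steps one copy of the environment per worker process (the environment and worker count are illustrative):

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    return gym.make("CartPole-v1")

if __name__ == "__main__":  # guard required by subprocess start methods
    # Four environment copies, each stepped in its own process
    vec_env = SubprocVecEnv([make_env for _ in range(4)])
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=100_000)
```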
-
GAIL without actions?
Found relevant code at https://github.com/hill-a/stable-baselines
-
Best framework to use if learning today
Depends on what you wanna do. The universal answer would be https://stable-baselines.readthedocs.io/
-
weird mean reward graph
As you will see here, it is recommended to augment this safety measure with a target KL divergence, which ensures even smoother learning and enforces early stopping to prevent learning collapses.
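In SB3's PPO this safeguard is the target_kl parameter; a minimal sketch (the threshold and environment are illustrative):

```python
from stable_baselines3 import PPO

# target_kl aborts the remaining gradient steps of an update as soon as
# the approximate KL divergence between old and new policy exceeds it
model = PPO("MlpPolicy", "CartPole-v1", target_kl=0.03, verbose=1)
model.learn(total_timesteps=100_000)
```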
-
Nvidia ISAAC gym/RL
Code for https://arxiv.org/abs/1707.06347 found: https://github.com/hill-a/stable-baselines
- Bounds for observation
-
Understanding multi agent learning in OpenAI gym and stable-baselines
I haven't read the code, but stable-baselines doesn't support multi-agent environments (https://github.com/hill-a/stable-baselines/issues/423), so I think they're trying to make learning multi-agent easier with Environment.train().
- Using Reinforcement Learning to beat the first boss in Dark Souls 3 with Proximal Policy Optimization
-
Reinforcement Learning Crash Course (Free)
- https://github.com/hill-a/stable-baselines (Tensorflow)
-
JAX Implementations of Actor-Critic Algorithms
- tf2 speed: https://github.com/hill-a/stable-baselines/issues/576#issuecomment-573331715
What are some alternatives?
optuna - A hyperparameter optimization framework
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
gym-pybullet-drones - PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
Tic-Tac-Toe-Gym - A Tic-Tac-Toe game made with Python using the PyGame library and the Gym library to implement the AI with reinforcement learning
pybullet-gym - Open-source implementations of OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform.
DI-engine - OpenDILab Decision AI Engine
rl-trained-agents - A collection of pre-trained RL agents using Stable Baselines3
gym - A toolkit for developing and comparing reinforcement learning algorithms