| | stable-baselines3 | rlpyt |
|---|---|---|
| Mentions | 46 | 4 |
| Stars | 7,953 | 2,197 |
| Growth | 3.1% | - |
| Activity | 8.2 | 0.0 |
| Latest commit | 7 days ago | over 3 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-baselines3
-
Sim-to-real RL pipeline for open-source wheeled bipeds
The latest release (v3.0.0) of Upkie's software brings a functional sim-to-real reinforcement learning pipeline based on Stable Baselines3, with standard sim-to-real tricks. The pipeline trains on the Gymnasium environments distributed in upkie.envs (setup: pip install upkie) and is implemented in the PPO balancer. The post includes a video of a trained policy running on an Upkie.
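For context, here is a minimal sketch of the Stable Baselines3 + Gymnasium training pattern such a pipeline builds on; the environment below is a standard Gymnasium task standing in for the `upkie.envs` environments, not the actual balancer setup:

```python
# Minimal sketch: train PPO on a Gymnasium environment with Stable Baselines3.
# "Pendulum-v1" is a stand-in; the Upkie pipeline trains on upkie.envs instead.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
model.save("ppo_policy")  # reload later with PPO.load("ppo_policy")
```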
-
[P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bugfixes, documentation updates and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
-
[Question] Why are there so few algorithms implemented in SB3?
I am wondering why there are so few algorithms in Stable Baselines 3 (SB3, https://github.com/DLR-RM/stable-baselines3/tree/master). I was expecting some algorithms like ICM, HIRO, DIAYN, ... Why are there no model-based, skill-chaining, or hierarchical-RL algorithms implemented there?
-
Stable baselines! Where my people at?
Discord is more focused, and they have a page for people who want to contribute: https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md
-
SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
I traced this error back to the ReplayBuffer imported from `SB3`. This is the problem function -
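The function the post refers to is cut off above. As a general starting point for this kind of error, Stable Baselines3 ships an environment checker that reports unsupported observation and action spaces up front; a sketch, with a standard environment standing in for the poster's setup:

```python
# Sketch: use SB3's env checker to surface unsupported spaces before training.
# The environment is a placeholder; the poster's custom setup is not shown here.
import gymnasium as gym
from stable_baselines3.common.env_checker import check_env

env = gym.make("Pendulum-v1")  # Box(3,) observations, like the error message
check_env(env, warn=True)      # raises or warns if SB3 cannot handle the spaces
```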
-
Exporting an A2C model created with stable-baselines3 to PyTorch
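One hedged route for this: the policy inside an SB3 model is already a `torch.nn.Module`, so its weights can be saved and reloaded with plain PyTorch (a sketch, not the poster's exact code):

```python
# Sketch: export the torch policy wrapped inside an SB3 A2C model.
import torch as th
from stable_baselines3 import A2C

model = A2C("MlpPolicy", "CartPole-v1").learn(total_timesteps=5_000)
th.save(model.policy.state_dict(), "a2c_policy.pt")  # plain PyTorch checkpoint

# Later: rebuild an identically configured policy and load the weights.
restored = A2C("MlpPolicy", "CartPole-v1")
restored.policy.load_state_dict(th.load("a2c_policy.pt"))
```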
-
Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs very difficult and has resulted in a fractured ecosystem.
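A hedged sketch of what that looks like in practice: with Shimmy installed, dm_control tasks appear as Gymnasium environments and can be handed straight to SB3 (environment id and registration details may differ between versions):

```python
# Sketch: train on a dm_control task through Shimmy's Gymnasium bindings.
# Depending on versions, `import shimmy` may be needed to register the ids.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("dm_control/cartpole-balance-v0")  # exposed by Shimmy
# dm_control observations are dictionaries, hence a multi-input policy.
model = SAC("MultiInputPolicy", env)
model.learn(total_timesteps=1_000)
```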
-
Stable-Baselines3 v1.8 Release
Changelog: https://github.com/DLR-RM/stable-baselines3/releases/tag/v1.8.0
-
[P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up
Great project! One question, though: is there any reason why you are not using existing RL implementations such as Stable Baselines instead of creating your own?
-
Is stable-baselines3 compatible with gymnasium/gymnasium-robotics?
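Short answer from the SB3 side: releases from v2.0 onward target Gymnasium directly, and goal-based gymnasium-robotics tasks (dictionary observations) work with the multi-input policies. A hedged sketch, with environment id and registration details depending on library versions:

```python
# Sketch: SB3 (>= 2.0) with a gymnasium-robotics goal environment.
# Env id and registration may vary by version of gymnasium-robotics.
import gymnasium as gym
import gymnasium_robotics  # registers the robotics environments (older versions do this on import)
from stable_baselines3 import SAC

env = gym.make("FetchReach-v2")
model = SAC("MultiInputPolicy", env)  # Dict observations need a multi-input policy
model.learn(total_timesteps=1_000)
```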
rlpyt
-
About Prior Action Distribution in Entropy Regularized Actor-Critic Methods
The above example is from the rlpyt library's SAC algorithm.
-
Best PyTorch RL library for doing research
I borrow a lot of performance tricks from Sample Factory, which is awesome but hard to modify beyond its original APPO algorithm. rlpyt was more modular, and I borrowed more ideas from it (namedarraytuple), but it was still too limited.
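For readers unfamiliar with the idea: a `namedarraytuple` is a namedtuple-like container whose indexing and slicing apply to every field at once, which is handy for batched trajectories. A hedged sketch of the usage (import path as in the rlpyt repository):

```python
# Sketch of rlpyt's namedarraytuple: one index/slice applies to all fields.
import numpy as np
from rlpyt.utils.collections import namedarraytuple

Samples = namedarraytuple("Samples", ["observation", "action"])
batch = Samples(
    observation=np.zeros((64, 4), dtype=np.float32),
    action=np.zeros((64, 2), dtype=np.float32),
)
first = batch[0]       # a Samples holding element 0 of every field
window = batch[10:20]  # slicing is broadcast across fields the same way
```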
-
Spec for RL agent implementation?
rlpyt also has abstractions for additional things besides environments: https://github.com/astooke/rlpyt
-
PPO+LSTM Implementation
rlpyt is a library I'm studying right now; it could be worth a shot. The code base is somewhat complex, but after some reading it's not so bad :)
What are some alternatives?
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
gym - A toolkit for developing and comparing reinforcement learning algorithms.
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
tianshou - An elegant PyTorch deep reinforcement learning library.
PyTorch - Tensors and dynamic neural networks in Python with strong GPU acceleration
minimalRL - Implementations of basic RL algorithms with minimal lines of code! (PyTorch based)
cleanrl - High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
acme - A library of reinforcement learning components and agents
sample-factory - High throughput synchronous and asynchronous reinforcement learning
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros