Stable-baselines3 Alternatives
Similar projects and alternatives to stable-baselines3
- Ray
  Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
- cleanrl
  High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG).
- PettingZoo
  An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.
- rl-baselines3-zoo
  A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
- agents
  TF-Agents: a reliable, scalable, and easy-to-use TensorFlow library for Contextual Bandits and Reinforcement Learning.
- SuperSuit
  A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers).
- stable-baselines3-contrib
  Contrib package for Stable-Baselines3: experimental reinforcement learning (RL) code.
- RL-Adventure
  PyTorch implementations of DQN, DDQN, prioritized replay, noisy networks, distributional values, Rainbow, and hierarchical RL.
- Tic-Tac-Toe-Gym
  The Tic-Tac-Toe game made with Python, using the PyGame library and the Gym library to implement the AI with reinforcement learning.
stable-baselines3 discussion
stable-baselines3 reviews and mentions
- Sim-to-real RL pipeline for open-source wheeled bipeds
  The latest release (v3.0.0) of Upkie's software brings a functional sim-to-real reinforcement learning pipeline based on Stable Baselines3, with standard sim-to-real tricks. The pipeline trains on the Gymnasium environments distributed in upkie.envs (setup: pip install upkie) and is implemented in the PPO balancer.
- [P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
  PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bug fixes, documentation updates, and testing expansions. We are also very excited to announce three tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
- [Question] Why are there so few algorithms implemented in SB3?
  I am wondering why there are so few algorithms in Stable Baselines 3 (SB3, https://github.com/DLR-RM/stable-baselines3/tree/master). I was expecting algorithms like ICM, HIRO, DIAYN, ... Why are there no model-based, skill-chaining, or hierarchical-RL algorithms implemented there?
- Stable baselines! Where my people at?
  Discord is more focused, and they have a page for people who want to contribute: https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md
- SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
  I debugged this error down to the ReplayBuffer imported from SB3.
- Exporting an A2C model created with stable-baselines3 to PyTorch
- Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
  Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs highly difficult and has resulted in a fractured ecosystem.
- Stable-Baselines3 v1.8 Release
  Changelog: https://github.com/DLR-RM/stable-baselines3/releases/tag/v1.8.0
- [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed-up
  Great project! One question, though: is there any reason why you are not using existing RL models, such as Stable Baselines, instead of creating your own?
- Is stable-baselines3 compatible with gymnasium/gymnasium-robotics?
Stats
DLR-RM/stable-baselines3 is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of stable-baselines3 is Python.
Popular Comparisons
- stable-baselines3 VS Ray
- stable-baselines3 VS stable-baselines
- stable-baselines3 VS Pytorch
- stable-baselines3 VS cleanrl
- stable-baselines3 VS tianshou
- stable-baselines3 VS Super-mario-bros-PPO-pytorch
- stable-baselines3 VS ElegantRL
- stable-baselines3 VS Tic-Tac-Toe-Gym
- stable-baselines3 VS SuperSuit
- stable-baselines3 VS rl-baselines3-zoo