Stable-baselines Alternatives
Similar projects and alternatives to stable-baselines
-
stable-baselines3
PyTorch version of Stable Baselines, with reliable implementations of reinforcement learning algorithms.
-
Ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
-
rl-baselines3-zoo
A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
-
Super-mario-bros-PPO-pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
-
Tic-Tac-Toe-Gym
A Tic-Tac-Toe game made in Python with the PyGame library, using the Gym library to implement the AI with reinforcement learning.
-
SuperSuit
A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers).
-
soft-actor-critic
Implementation of the Soft Actor Critic algorithm using Pytorch. (by thomashirtz)
-
open-ai
OpenAI PHP SDK: the most downloaded, forked, and contributed-to PHP SDK for OpenAI GPT-3 and DALL-E, with a large supporting community and compatibility with Laravel, Symfony, Yii, CakePHP, or any other PHP framework. It also supports ChatGPT-like streaming.
stable-baselines reviews and mentions
-
Distributed implementation tips
As underlined by gold-panda, you can give multiprocessing a try. I once implemented a version based on what is done in stable-baselines v1 (https://github.com/hill-a/stable-baselines/blob/master/stable_baselines/common/vec_env/subproc_vec_env.py)
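The pattern behind that `subproc_vec_env.py` file is one environment per child process, driven over pipes. A minimal self-contained sketch of that worker/pipe design (using only the standard library and a toy stand-in environment, not the real library classes):

```python
import multiprocessing as mp

def worker(conn, make_env):
    """Run one environment in a child process, serving step/reset commands."""
    env = make_env()
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            conn.send(env.step(data))
        elif cmd == "reset":
            conn.send(env.reset())
        elif cmd == "close":
            conn.close()
            break

class ToyEnv:
    """Stand-in environment: observation is a step counter, reward echoes the action."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, float(action), self.t >= 5, {}

class SubprocVecEnv:
    """Minimal vectorized env: one subprocess per environment, commands over pipes."""
    def __init__(self, env_fns):
        self.parent_conns, self.procs = [], []
        for fn in env_fns:
            parent, child = mp.Pipe()
            p = mp.Process(target=worker, args=(child, fn), daemon=True)
            p.start()
            self.parent_conns.append(parent)
            self.procs.append(p)
    def reset(self):
        for c in self.parent_conns:
            c.send(("reset", None))
        return [c.recv() for c in self.parent_conns]
    def step(self, actions):
        # Send all actions first, then collect, so the envs step in parallel.
        for c, a in zip(self.parent_conns, actions):
            c.send(("step", a))
        obs, rews, dones, infos = zip(*[c.recv() for c in self.parent_conns])
        return list(obs), list(rews), list(dones), list(infos)
    def close(self):
        for c in self.parent_conns:
            c.send(("close", None))
        for p in self.procs:
            p.join()

if __name__ == "__main__":
    vec = SubprocVecEnv([ToyEnv for _ in range(4)])
    vec.reset()
    obs, rews, dones, infos = vec.step([1, 2, 3, 4])
    print(obs, rews)  # [1, 1, 1, 1] [1.0, 2.0, 3.0, 4.0]
    vec.close()
```

The real implementation adds observation-space handling, seeding, and rendering, but the send-all-then-receive-all loop in `step` is the core of the speedup: all environments advance concurrently instead of sequentially.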
-
GAIL without actions?
Found relevant code at https://github.com/hill-a/stable-baselines + all code implementations here
-
Best framework to use if learning today
Depends on what you want to do. The universal answer would be https://stable-baselines.readthedocs.io/
-
weird mean reward graph
As you will see here, it is recommended to augment this safety measure with a target KL divergence, which ensures even smoother learning and enforces early stopping to prevent learning collapses.
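The target-KL check amounts to: after each pass of PPO updates, estimate how far the new policy has drifted from the one that collected the data, and stop the update round early once the drift exceeds a threshold (commonly around 1.5× the target). A hedged sketch, with hypothetical function names and a made-up `do_epoch` callback standing in for one pass of gradient updates:

```python
import math

def approx_kl(old_logps, new_logps):
    """Estimate KL(old || new) from sampled action log-probs as E[old - new]."""
    return sum(o - n for o, n in zip(old_logps, new_logps)) / len(old_logps)

def ppo_update_with_target_kl(epochs, do_epoch, old_logps, target_kl=0.01):
    """Run up to `epochs` update passes; stop early once the policy has
    drifted too far (approx KL > 1.5 * target_kl). Returns epochs actually run."""
    for epoch in range(epochs):
        new_logps = do_epoch(epoch)  # one pass of minibatch gradient updates
        if approx_kl(old_logps, new_logps) > 1.5 * target_kl:
            return epoch + 1         # early stop: policy moved too much
    return epochs

# Toy demo: each "epoch" lowers the probability of the sampled actions a bit.
old = [math.log(0.5)] * 4
drift = lambda epoch: [math.log(0.5) - 0.01 * (epoch + 1)] * 4
ppo_update_with_target_kl(10, drift, old, target_kl=0.01)  # stops after 2 epochs
```

This is why the reward curve smooths out: updates that would push the policy far from the data-collecting policy (where the PPO ratio estimates stop being trustworthy) are simply cut short.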
-
Nvidia ISAAC gym/RL
Code for https://arxiv.org/abs/1707.06347 found: https://github.com/hill-a/stable-baselines
- Bounds for observation
-
Understanding multi agent learning in OpenAI gym and stable-baselines
I haven't read the code, but stable-baselines doesn't support multi-agent environments (https://github.com/hill-a/stable-baselines/issues/423), so I think they're trying to make multi-agent learning easier with Environment.train().
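A common workaround for that limitation is to wrap a multi-agent environment so it presents the single-agent Gym API: one "learner" agent is exposed, and the other agents' moves (from a fixed or scripted policy) are folded into `step()`. A toy sketch of that idea, with all class and parameter names hypothetical and no Gym dependency:

```python
class TwoPlayerEnv:
    """Toy multi-agent env: both players pick a number each turn;
    player 0 is rewarded for matching player 1 (zero-sum)."""
    def reset(self):
        return 0
    def step(self, actions):
        a0, a1 = actions
        reward0 = 1.0 if a0 == a1 else 0.0
        return 0, (reward0, -reward0), True, {}

class SingleAgentView:
    """Expose a multi-agent env through the single-agent Gym-style API by
    scripting the opponent and returning only the learner's reward."""
    def __init__(self, env, opponent_policy):
        self.env = env
        self.opponent_policy = opponent_policy
    def reset(self):
        return self.env.reset()
    def step(self, action):
        opp_action = self.opponent_policy()
        obs, (r0, _r1), done, info = self.env.step((action, opp_action))
        return obs, r0, done, info

# Usage: the learner sees an ordinary single-agent step().
env = SingleAgentView(TwoPlayerEnv(), opponent_policy=lambda: 3)
env.reset()
obs, reward, done, info = env.step(3)  # reward == 1.0: learner matched the opponent
```

This only trains against a fixed opponent; proper multi-agent training (self-play, PettingZoo-style APIs) needs framework support that stable-baselines never gained.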
- Using Reinforcement Learning to beat the first boss in Dark Souls 3 with Proximal Policy Optimization
-
Reinforcement Learning Crash Course (Free)
- https://github.com/hill-a/stable-baselines (Tensorflow)
-
JAX Implementations of Actor-Critic Algorithms
- tf2 speed: https://github.com/hill-a/stable-baselines/issues/576#issuecomment-573331715
Stats
hill-a/stable-baselines is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of stable-baselines is Python.
Popular Comparisons
- stable-baselines VS stable-baselines3
- stable-baselines VS Ray
- stable-baselines VS rl-baselines3-zoo
- stable-baselines VS Super-mario-bros-PPO-pytorch
- stable-baselines VS Tic-Tac-Toe-Gym
- stable-baselines VS gym
- stable-baselines VS DI-engine
- stable-baselines VS kaggle-environments
- stable-baselines VS soft-actor-critic
- stable-baselines VS open-ai