stable-baselines
gym
| | stable-baselines | gym |
|---|---|---|
| Mentions | 10 | 1 |
| Stars | 4,000 | - |
| Growth | - | - |
| Activity | 0.0 | - |
| Latest commit | over 1 year ago | - |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-baselines
- Distributed implementation tips
As underlined by gold-panda, you can give multiprocessing a try. I once implemented a version based on what is done in stable_baselines v1 (https://github.com/hill-a/stable-baselines/blob/master/stable_baselines/common/vec_env/subproc_vec_env.py)
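A minimal sketch of that multiprocessing pattern, using the SubprocVecEnv class linked above; the CartPole-v1 environment, the seeding scheme, and the PPO2 settings are illustrative assumptions, not taken from the thread.

```python
# Hedged sketch: run several gym environments in parallel subprocesses with
# stable-baselines' SubprocVecEnv, then train on the batched rollouts.
import gym
from stable_baselines import PPO2
from stable_baselines.common.vec_env import SubprocVecEnv

def make_env(env_id, seed):
    """Return a picklable thunk that builds one environment instance."""
    def _init():
        env = gym.make(env_id)
        env.seed(seed)  # distinct seed per worker process
        return env
    return _init

if __name__ == "__main__":  # guard required for subprocess start methods
    n_envs = 4  # illustrative worker count
    env = SubprocVecEnv([make_env("CartPole-v1", seed=i) for i in range(n_envs)])
    model = PPO2("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)
    env.close()
```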
- GAIL without actions?
Found relevant code at https://github.com/hill-a/stable-baselines
- Best framework to use if learning today
Depends on what you want to do; the universal answer would be https://stable-baselines.readthedocs.io/
- weird mean reward graph
As you will see here, it is recommended to augment this safety measure with a target KL divergence, which ensures even smoother learning and enforces early stopping to prevent learning collapses.
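That target KL safety measure is exposed in Stable-Baselines3 (listed under the alternatives below) as PPO's target_kl parameter, which stops a round of gradient epochs early once the approximate KL divergence between the old and new policy grows too large. A minimal sketch, assuming SB3 is installed; the 0.03 threshold and the CartPole-v1 environment are arbitrary examples.

```python
# Hedged sketch: KL-based early stopping in Stable-Baselines3's PPO.
from stable_baselines3 import PPO

model = PPO(
    "MlpPolicy",
    "CartPole-v1",
    target_kl=0.03,  # arbitrary example; SB3 stops an update epoch when the
                     # approximate KL exceeds roughly 1.5 * target_kl
    verbose=1,
)
model.learn(total_timesteps=50_000)
```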
- Nvidia ISAAC gym/RL
Code for https://arxiv.org/abs/1707.06347 (the PPO paper) found: https://github.com/hill-a/stable-baselines
- Bounds for observation
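As background for that thread title, a minimal illustration of how gym declares observation bounds with spaces.Box; the low/high values here are arbitrary assumptions, not taken from the thread.

```python
# Hedged illustration: per-dimension observation bounds via gym.spaces.Box.
import numpy as np
from gym import spaces

observation_space = spaces.Box(
    low=np.array([-1.0, 0.0], dtype=np.float32),   # lower bound per dimension
    high=np.array([1.0, 10.0], dtype=np.float32),  # upper bound per dimension
    dtype=np.float32,
)
```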
- Understanding multi-agent learning in OpenAI gym and stable-baselines
I haven't read the code, but stable-baselines doesn't support multi-agent environments (https://github.com/hill-a/stable-baselines/issues/423), so I think they're trying to make multi-agent learning easier with Environment.train().
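A common workaround for that limitation is parameter sharing through SuperSuit (listed under the alternatives below), which flattens a PettingZoo parallel environment into a vectorized single-agent one that Stable-Baselines3 can train. A hedged sketch; the pistonball_v6 environment, the wrapper names, and the hyperparameters follow the SuperSuit/PettingZoo docs and may differ across versions.

```python
# Hedged sketch: train one shared Stable-Baselines3 policy over all agents of
# a PettingZoo environment by converting it with SuperSuit wrappers.
import supersuit as ss
from pettingzoo.butterfly import pistonball_v6
from stable_baselines3 import PPO

env = pistonball_v6.parallel_env()          # multi-agent parallel environment
env = ss.pettingzoo_env_to_vec_env_v1(env)  # expose each agent as one env slot
env = ss.concat_vec_envs_v1(env, 4, num_cpus=1, base_class="stable_baselines3")

model = PPO("CnnPolicy", env, verbose=1)    # image observations -> CNN policy
model.learn(total_timesteps=100_000)
```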
- Using Reinforcement Learning to beat the first boss in Dark Souls 3 with Proximal Policy Optimization
- Reinforcement Learning Crash Course (Free)
https://github.com/hill-a/stable-baselines (TensorFlow)
- JAX Implementations of Actor-Critic Algorithms
TF2 speed: https://github.com/hill-a/stable-baselines/issues/576#issuecomment-573331715
gym
- Nvidia ISAAC gym/RL
Code for https://arxiv.org/abs/1606.01540 (the OpenAI Gym paper) found: https://github.com/kodamanbou/gym
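For context, a minimal interaction loop in the classic pre-0.26 gym API that forks of this era use; CartPole-v1, the random policy, and the episode count are arbitrary examples.

```python
# Hedged sketch: the classic gym loop (pre-0.26 API, where step() returns a
# 4-tuple and reset() returns only the observation).
import gym

env = gym.make("CartPole-v1")
for episode in range(3):
    obs = env.reset()
    done = False
    episode_return = 0.0
    while not done:
        action = env.action_space.sample()          # random policy for illustration
        obs, reward, done, info = env.step(action)
        episode_return += reward
    print(f"episode {episode}: return {episode_return}")
env.close()
```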
What are some alternatives?
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Tic-Tac-Toe-Gym - A Tic-Tac-Toe game made with Python using the PyGame library, with the Gym library used to implement the AI with reinforcement learning
DI-engine - OpenDILab Decision AI Engine
kaggle-environments
soft-actor-critic - Implementation of the Soft Actor Critic algorithm using Pytorch.
open-ai - OpenAI PHP SDK: the most downloaded, forked, and contributed-to PHP SDK for OpenAI GPT-3 and DALL-E, with a huge community and support for Laravel, Symfony, Yii, CakePHP, or any PHP framework. It also supports ChatGPT-like streaming.
SuperSuit - A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers)
gym-battleship - Battleship environment for reinforcement learning tasks