|  | rl-baselines-zoo | stable-baselines3-contrib |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 1,106 | 427 |
| Stars growth | - | 4.0% |
| Activity | 0.0 | 6.7 |
| Latest commit | over 1 year ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rl-baselines-zoo
-
Agent trains well with PPO but terribly with SAC → advice on hyperparameters
Take a look at these tuned sets of hyperparameters for various problems in PPO and SAC. The batch sizes are far smaller regardless of the problem, and your initial learning rate may also be too high.
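To make the advice concrete, here is an illustrative SAC hyperparameter set in the style of rl-baselines-zoo's tuned configs. The values are examples of typical tuned magnitudes, not the zoo's exact numbers for any particular environment:

```python
# Illustrative SAC hyperparameters in the style of rl-baselines-zoo's tuned
# configs (example values, not the zoo's exact tuned numbers).
sac_hyperparams = {
    "learning_rate": 3e-4,    # usually much lower than a hand-picked first guess
    "batch_size": 256,        # a small minibatch, regardless of the problem
    "buffer_size": 1_000_000,
    "gamma": 0.99,
    "tau": 0.005,             # soft target-network update coefficient
    "train_freq": 1,
    "gradient_steps": 1,
}

def linear_schedule(progress_remaining, initial_lr=3e-4):
    """Linear learning-rate decay, a schedule the zoo configs often use.

    progress_remaining goes from 1.0 (start of training) to 0.0 (end).
    """
    return progress_remaining * initial_lr
```

With Stable-Baselines3 these would typically be passed as keyword arguments to the `SAC` constructor.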
-
How do I convert zoo / gym trained models to TensorFlow Lite or PyTorch TorchScript?
https://github.com/araffin/rl-baselines-zoo (TensorFlow based, using https://github.com/hill-a/stable-baselines)
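Since SB3 policies are plain PyTorch modules, one common route is `torch.jit.trace`. A minimal sketch of the tracing step, using a stand-in MLP so it runs without a trained model (with a real model you would trace `model.policy` or its action network instead):

```python
import torch
import torch.nn as nn

# Stand-in for a trained policy network (e.g. model.policy from SB3).
policy_net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
policy_net.eval()

example_obs = torch.zeros(1, 4)               # one observation batch
traced = torch.jit.trace(policy_net, example_obs)
# traced.save("policy_torchscript.pt")        # loadable later via torch.jit.load

# Sanity check: the traced module matches the original on the example input.
with torch.no_grad():
    same = torch.allclose(policy_net(example_obs), traced(example_obs))
```

TensorFlow-era zoo models (from the original stable-baselines) would instead go through TensorFlow's own export path before conversion to TensorFlow Lite.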
stable-baselines3-contrib
-
Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
# https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/blob/master/sb3_contrib/tqc/tqc.py :
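The question concerns combining TQC with n-step returns. The n-step target itself is standard and independent of TQC; a minimal sketch (not sb3_contrib's internal code):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Compute the n-step return G = sum_k gamma^k * r_k + gamma^n * V(s_n).

    rewards:         the n rewards observed after the first action.
    bootstrap_value: the critic's value estimate at the state n steps later.
    """
    g = bootstrap_value
    for r in reversed(rewards):     # fold backwards: G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g

# Example: 3-step return with rewards [1, 0, 1] and bootstrap value 2.0
# G = 1 + 0.99*0 + 0.99**2 * 1 + 0.99**3 * 2.0
```

In a distributional critic like TQC, the same bootstrapped target is applied per quantile rather than to a single scalar value.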
-
Understanding Action Masking in RLlib
Here's a theoretical overview and an implementation of action masking for PPO.
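The core trick behind action masking is simple: push the logits of invalid actions to negative infinity before the softmax, so they receive zero probability and contribute no gradient. A pure-Python sketch of that idea (not MaskablePPO's actual code):

```python
import math

def masked_softmax(logits, mask):
    """Softmax over logits with invalid actions forced to probability zero.

    logits: raw action scores from the policy network.
    mask:   True for valid actions, False for invalid ones.
    """
    masked = [l if ok else float("-inf") for l, ok in zip(logits, mask)]
    m = max(masked)                            # subtract max for numerical stability
    exps = [math.exp(l - m) for l in masked]   # exp(-inf) == 0.0
    z = sum(exps)
    return [e / z for e in exps]

probs = masked_softmax([2.0, 1.0, 3.0], [True, False, True])
# the invalid action gets probability 0; the valid ones renormalize
```

This is essentially what sb3_contrib's MaskablePPO does inside its action distribution, with the mask supplied by the environment each step.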
-
PPO rollout buffer for turn-based two-player game with varying turn lengths
Simplified version of rollout collection (adapted from ppo_mask.py line 282):
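One way to handle varying turn lengths is to keep each player's last (obs, action) pending and accumulate rewards into it until that player acts again, committing the transition at that point. A hedged, self-contained sketch of this bookkeeping (an illustration of the idea, not the ppo_mask.py code the post adapts):

```python
def collect_turn_based(steps, n_players=2):
    """steps: (player, obs, action, rewards) tuples in game order, where
    rewards[p] is player p's reward for that step.

    Returns per-player transitions (obs, action, reward), where each reward
    accumulates everything that happened until that player's next turn.
    """
    pending = {p: None for p in range(n_players)}   # last uncommitted step
    out = {p: [] for p in range(n_players)}
    for player, obs, action, rewards in steps:
        if pending[player] is not None:             # player's turn comes around again
            out[player].append(tuple(pending[player]))
        pending[player] = [obs, action, 0.0]
        for p in range(n_players):                  # credit this step's rewards
            if pending[p] is not None:
                pending[p][2] += rewards[p]
    for p in range(n_players):                      # flush remainder at episode end
        if pending[p] is not None:
            out[p].append(tuple(pending[p]))
    return out
```

Each player then trains on its own stream of transitions with consistent (state, action, reward-until-next-decision) semantics, regardless of how many opponent moves happened in between.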
-
GitHub Copilot: your AI pair programmer
Transformers (GPT-3) aren't quite _supervised_, but they do require valid samples.
Agree 100% with RL being the path forward. You probably have already seen ( https://venturebeat.com/2021/06/09/deepmind-says-reinforceme... ). Personally I'm really stoked for this https://github.com/Stable-Baselines-Team/stable-baselines3-c... , which will make it a lot easier for rubes like me to use RL.
-
[P] Stable-Baselines3 v1.0 - Reliable implementations of RL algorithms
But as we already have vanilla DQN and QR-DQN (in our contrib repo: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ) I think it is already a good start for off-policy discrete action algorithms. (QR-DQN is usually competitive vs DQN+extensions)
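For context on why QR-DQN is competitive: instead of a single Q-value per action, it regresses a fixed set of quantiles of the return distribution and acts greedily on their mean. A small pure-Python sketch of those two pieces (illustrative, not sb3_contrib's implementation):

```python
def quantile_midpoints(n_quantiles):
    """tau_hat_i = (2i + 1) / (2N): the fixed quantile targets QR-DQN regresses to."""
    return [(2 * i + 1) / (2 * n_quantiles) for i in range(n_quantiles)]

def greedy_action(quantiles_per_action):
    """Act greedily w.r.t. the mean of each action's quantile estimates;
    the mean of the quantiles recovers the ordinary Q-value."""
    q_values = [sum(q) / len(q) for q in quantiles_per_action]
    return max(range(len(q_values)), key=q_values.__getitem__)
```

The learned distribution is what the "+extensions" Rainbow-style variants exploit, which is why QR-DQN alone often closes much of that gap.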
What are some alternatives?
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
muzero-general - MuZero
Minigrid - Simple and easily configurable grid world environments for reinforcement learning
TabNine - AI Code Completions
seed_rl - SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture.
pybullet-gym - Open-source implementations of OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform.
copilot-cli - The AWS Copilot CLI is a tool for developers to build, release and operate production ready containerized applications on AWS App Runner or Amazon ECS on AWS Fargate.
pytorch-blender - :sweat_drops: Seamless, distributed, real-time integration of Blender into PyTorch data pipelines
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
dreamerv2 - Mastering Atari with Discrete World Models