recurrent-ppo-truncated-bptt vs PPO-PyTorch

| | recurrent-ppo-truncated-bptt | PPO-PyTorch |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 106 | 1,483 |
| Growth | - | - |
| Activity | 3.2 | 2.8 |
| Latest commit | 11 days ago | 5 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
recurrent-ppo-truncated-bptt
- What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
  I provide baseline implementations of TransformerXL + PPO and LSTM/GRU + PPO. These are designed to be slim and easy to follow, so that you can extend them with the features and tooling you need.
- How does a recurrent generator work in PPO?
- LSTM encoder in the policy?
- What is the best approach to a POMDP environment?
  Second, when training a limited-view agent in a tabular environment, I expected the recurrent PPO agent to perform better than CNN-based PPO, but it did not. I used this repository, which was already implemented, and observed slow learning with it.
- LSTM with SAC not learning well on tasks like Mountain Car and Lunar Lander?
- Recurrent PPO using truncated BPTT
PPO-PyTorch
- Where does the loss function for Policy Gradient come from?
  It's just very convenient implementation-wise; in just a few lines you can get the "loss" (from https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO.py):
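The linked file is not reproduced here, but the "few lines" refer to PPO's clipped surrogate objective. A minimal sketch of that objective, using NumPy arrays and hypothetical variable names rather than the repository's actual PyTorch code:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss (a quantity to be minimized)."""
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed via log-probs.
    ratio = np.exp(logp_new - logp_old)
    # Unclipped and clipped surrogate terms.
    surr1 = ratio * advantages
    surr2 = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the elementwise minimum (pessimistic bound), negate for descent.
    return -np.mean(np.minimum(surr1, surr2))
```

When the new and old policies coincide, the ratio is 1 everywhere and the loss reduces to minus the mean advantage; the clipping only bites once the policy moves more than `clip_eps` away from the one that collected the data.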
- A2C/PPO with continuous action space
In some methods, like the one here, the actor network has two heads, one for the mean and one for the variance. In other methods, like the one here, the network only outputs the mean, while the variance is pre-defined and is decaying throughout the training.
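The two variants described above can be sketched as follows, with hypothetical names and a plain NumPy diagonal-Gaussian log-density standing in for what would normally be PyTorch network heads:

```python
import numpy as np

def gaussian_log_prob(action, mean, std):
    # Log-density of a diagonal Gaussian policy, summed over action dims.
    var = std ** 2
    return float(np.sum(-0.5 * np.log(2.0 * np.pi * var)
                        - (action - mean) ** 2 / (2.0 * var)))

# Variant 1: the actor has two heads; here they are stand-in arrays
# rather than the outputs of a real network.
mean_head = np.array([0.1, -0.3])  # predicted per-dimension action mean
std_head = np.array([0.5, 0.4])    # predicted per-dimension std

# Variant 2: the network outputs only the mean, while the std follows
# a pre-defined decay schedule over training steps.
def decayed_std(step, std_start=0.5, std_min=0.05, decay=0.999):
    return max(std_min, std_start * decay ** step)
```

In either variant, an action is sampled as `mean + std * noise` with standard-normal noise, and `gaussian_log_prob` supplies the log-probability that the policy-gradient loss needs.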
What are some alternatives?
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
HandyRL - HandyRL is a handy and simple framework based on Python and PyTorch for distributed reinforcement learning that is applicable to your own environments.
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
l2rpn-baselines - L2RPN Baselines, a repository hosting baselines for L2RPN competitions.
snakeAI - Testing MLP, DQN, PPO, SAC, and policy-gradient agents on a Snake game.
Pytorch-PCGrad - Pytorch reimplementation for "Gradient Surgery for Multi-Task Learning"
neroRL - Deep Reinforcement Learning Framework done with PyTorch
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
pytorch-accelerated - A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible training loop which is flexible enough to handle the majority of use cases, and capable of utilizing different hardware options with no code changes required. Docs: https://pytorch-accelerated.readthedocs.io/en/latest/
nes-torch - Minimal PyTorch Library for Natural Evolution Strategies