recurrent-ppo-truncated-bptt vs pytorch-a2c-ppo-acktr-gail
| | recurrent-ppo-truncated-bptt | pytorch-a2c-ppo-acktr-gail |
|---|---|---|
| Mentions | 6 | 3 |
| Stars | 106 | 3,423 |
| Growth | - | - |
| Activity | 3.2 | 0.0 |
| Last commit | 11 days ago | almost 2 years ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
- Stars - the number of stars a project has on GitHub.
- Growth - month-over-month growth in stars.
- Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
recurrent-ppo-truncated-bptt
What RL library supports custom LSTM and Transformer neural networks for use with algorithms such as PPO?
I provide baseline implementations of TransformerXL + PPO and LSTM/GRU + PPO. These are designed to be slim and easy to follow, so that you can extend them with the features and tooling you need.
- How does a recurrent generator work in PPO?
- LSTM encoder in the policy?
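A recurrent generator differs from a standard PPO minibatch generator in that it samples whole fixed-length sequences, each paired with the recurrent state at its first step, so gradients are truncated at sequence boundaries. The sketch below is a minimal illustration of that idea, not the repository's actual code; the tensor layout `(num_steps, num_envs, ...)` and all function/parameter names are assumptions.

```python
import numpy as np

def recurrent_minibatch_generator(obs, actions, advantages, hxs,
                                  num_envs, num_steps, seq_len,
                                  num_minibatches, rng=None):
    """Yield minibatches of contiguous sequences for truncated BPTT.

    obs, actions, advantages: arrays shaped (num_steps, num_envs, ...).
    hxs: recurrent hidden state recorded at every step,
         shaped (num_steps, num_envs, hidden_size).
    Each sampled item is a chunk of seq_len consecutive steps from one
    environment, together with the hidden state at the chunk's first step,
    so backpropagation through time stops at chunk boundaries.
    """
    rng = rng or np.random.default_rng()
    num_seqs = num_steps // seq_len
    # (env index, first step) of every candidate sequence
    starts = [(e, s * seq_len) for e in range(num_envs) for s in range(num_seqs)]
    rng.shuffle(starts)
    per_batch = len(starts) // num_minibatches
    for i in range(num_minibatches):
        batch = starts[i * per_batch:(i + 1) * per_batch]
        yield (
            np.stack([obs[t:t + seq_len, e] for e, t in batch]),         # (B, L, ...)
            np.stack([actions[t:t + seq_len, e] for e, t in batch]),     # (B, L)
            np.stack([advantages[t:t + seq_len, e] for e, t in batch]),  # (B, L)
            np.stack([hxs[t, e] for e, t in batch]),  # initial hidden state per sequence
        )
```

During the update, each sequence is unrolled through the LSTM starting from its stored initial hidden state, which is what makes the BPTT "truncated": no gradient flows past the start of a chunk.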
What is the best approach to a POMDP environment?
Second, when training a limited-view agent in a tabular environment, I expected the recurrent PPO agent to outperform CNN-based PPO, but it didn't. I used this repository's existing implementation and observed slow learning with it.
- LSTM with SAC not learning well on tasks like Mountain Car and Lunar Lander?
- Recurrent PPO using truncated BPTT
pytorch-a2c-ppo-acktr-gail
How is advantage estimation done in PPO when episodes have variable lengths?
As an example, look at the compute_returns function here (and pay attention to how self.masks is used): https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/storage.py
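The idea behind the masks can be sketched as follows: a mask of 0 at an episode boundary zeroes out the bootstrap terms, so returns and advantages never leak across episodes even when several variable-length episodes share one rollout buffer. This is a simplified standalone version of Generalized Advantage Estimation, not the repository's exact code; the mask convention (masks[t] = 0 if the episode ended at step t) is an assumption.

```python
import numpy as np

def compute_gae(rewards, values, masks, last_value, gamma=0.99, lam=0.95):
    """GAE over a rollout that may span several variable-length episodes.

    rewards, values, masks: arrays of shape (T,); last_value: scalar
    bootstrap value for the state after the final step.
    masks[t] is 0.0 if the episode ended at step t, else 1.0; multiplying
    the bootstrap terms by the mask cuts the recursion at episode ends,
    so each episode's advantages are computed independently.
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    next_value = last_value
    for t in reversed(range(T)):
        # TD error; the mask removes the bootstrap if the episode ended here
        delta = rewards[t] + gamma * next_value * masks[t] - values[t]
        # the mask also stops the advantage from accumulating across episodes
        gae = delta + gamma * lam * masks[t] * gae
        advantages[t] = gae
        next_value = values[t]
    returns = advantages + values
    return advantages, returns
```

With masks all equal to 1 this reduces to ordinary GAE over a single trajectory; a 0 anywhere restarts the recursion, which is exactly what handles variable-length episodes.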
How do I pretrain a model on expert data?
Try using an imitation learning algorithm. Two popular options are MaxEnt IRL and GAIL. This repository has a GAIL implementation, and this repository has MaxEnt IRL and GAIL implementations. There are other implementations you can check out as well.
Trying to train a PPO agent for Pendulum-v0 from pixel inputs
For PPO, I used this repo, which includes most of the common tricks, including GAE, normalized rewards, etc. I have verified that this repo works on the traditional Pendulum-v0 task and on Atari games (Pong and Breakout).
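One of the tricks mentioned, reward normalization, is commonly implemented by scaling rewards with a running estimate of the standard deviation of the discounted return. The class below is a generic sketch of that technique under those assumptions, not any repository's exact code.

```python
class RunningRewardNormalizer:
    """Scale rewards by a running std of the discounted return.

    A common PPO trick: keeping reward magnitudes roughly unit-scale
    stabilizes value-function learning across tasks with very different
    reward ranges. Uses Welford's online algorithm for the variance.
    """
    def __init__(self, gamma=0.99, eps=1e-8):
        self.gamma = gamma
        self.eps = eps
        self.ret = 0.0   # running discounted return
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0    # sum of squared deviations (Welford)

    def _update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def std(self):
        if self.count < 2:
            return 1.0  # not enough data yet; leave rewards unscaled
        return (self.m2 / self.count + self.eps) ** 0.5

    def normalize(self, reward):
        self.ret = self.gamma * self.ret + reward
        self._update(self.ret)
        return reward / self.std
```

Note that only the scale is changed, not the sign or zero point: dividing by the return's std (rather than z-scoring each reward) preserves the reward structure the agent optimizes.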
What are some alternatives?
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
soft-actor-critic - Implementation of the Soft Actor Critic algorithm using Pytorch.
snakeAI - testing MLP, DQN, PPO, SAC, and policy-gradient agents on the game Snake
dm_control - Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
tensorforce - Tensorforce: a TensorFlow library for applied reinforcement learning
neroRL - Deep Reinforcement Learning Framework done with PyTorch
TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning.
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
PCGrad - Code for "Gradient Surgery for Multi-Task Learning"