neroRL vs recurrent-ppo-truncated-bptt

| | neroRL | recurrent-ppo-truncated-bptt |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 26 | 106 |
| Growth | - | - |
| Activity | 0.0 | 3.2 |
| Last Commit | 7 months ago | 6 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars. Activity: a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones; for example, an activity of 9.0 places a project among the top 10% of the most actively developed projects tracked.
neroRL
Convergence of PPO
You can take a look at my implementation: https://github.com/MarcoMeter/neroRL.
Recurrent PPO using truncated BPTT
This implementation does truncated BPTT only, but regarding plain BP performance I can refer you to my framework neroRL. The develop branch contains the BPTT implementation of the above recurrent baseline. In neroRL you can easily toggle BPTT on or off. Both codebases are largely the same, but neroRL has more tooling and supports more features (e.g. multi-discrete action spaces).
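To make the toggle concrete: truncated BPTT cuts each rollout into fixed-length sequences and backpropagates only within a sequence. Below is a minimal sketch of that chunking step (a hypothetical helper in PyTorch, not code from either repository):

```python
import torch

def split_episode(obs, actions, hxs, cxs, seq_len):
    """Chunk one episode into fixed-length sequences for truncated BPTT.

    obs:      (T, *obs_shape) observations of one episode
    actions:  (T,) actions taken
    hxs, cxs: (T, hidden_size) LSTM states recorded during the rollout
    """
    T = obs.shape[0]
    sequences = []
    for start in range(0, T, seq_len):
        end = min(start + seq_len, T)
        sequences.append({
            "obs": obs[start:end],
            "actions": actions[start:end],
            # The recorded state at the sequence's first step is detached,
            # so gradients stop at sequence borders: that is the truncation.
            "h0": hxs[start].detach(),
            "c0": cxs[start].detach(),
        })
    return sequences
```

Setting seq_len to the maximum episode length makes every episode a single sequence and recovers untruncated BPTT, which is effectively the on/off toggle described above.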
Implementing a Recurrent PPO Policy to Solve
If it is related to the recurrent policy implementation, then the function for that is defined here. I merged the current state of the recurrent policy into the develop branch.
recurrent-ppo-truncated-bptt
What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
I provide baseline implementations of TransformerXL + PPO and LSTM/GRU + PPO. These are designed to be slim and easy to follow, so that you can extend them with the features and toolset you need.
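As a hedged illustration of what such a slim recurrent baseline looks like (hypothetical class, PyTorch assumed; not either repository's actual API):

```python
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    """Minimal LSTM actor-critic head for PPO."""

    def __init__(self, obs_dim, num_actions, hidden_size=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_size), nn.ReLU())
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, num_actions)
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, obs_seq, state):
        # obs_seq: (batch, seq_len, obs_dim)
        # state:   tuple (h, c), each of shape (1, batch, hidden_size)
        x = self.encoder(obs_seq)
        x, state = self.lstm(x, state)  # gradients flow across the sequence here
        return self.policy_head(x), self.value_head(x), state
```

During the rollout, seq_len is 1 and the returned state is carried from step to step; during the PPO update, a recurrent minibatch generator yields whole sequences together with their recorded initial states instead of shuffled single transitions, which is the crux of the first question listed below.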
- How does a recurrent generator work in PPO?
- LSTM encoder in the policy?
What is the best approach to a POMDP environment?
Second, when training a limited-view agent in a tabular environment, I expected the recurrent PPO agent to outperform CNN-based PPO, but it didn't. I used this repository, which was already implemented, and observed slow learning with it.
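For context on what makes such a task a POMDP: any mechanism that hides part of the state forces the agent to rely on memory, which is where a recurrent policy is expected to pay off. A minimal sketch with gymnasium (hypothetical wrapper; it occludes observations at random rather than cropping the view spatially, as the limited-view setup above does):

```python
import numpy as np
import gymnasium as gym

class FlickerWrapper(gym.ObservationWrapper):
    """Blank out observations with probability p, turning an MDP into a POMDP."""

    def __init__(self, env, p=0.5, seed=None):
        super().__init__(env)
        self.p = p
        self.rng = np.random.default_rng(seed)

    def observation(self, obs):
        # With probability p the agent sees nothing this step and must
        # fall back on its memory of earlier observations.
        if self.rng.random() < self.p:
            return np.zeros_like(obs)
        return obs
```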
- LSTM with SAC not learning well on tasks like Mountain Car and Lunar Lander?
- Recurrent PPO using truncated BPTT
What are some alternatives?
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
snakeAI - testing MLP, DQN, PPO, SAC, and policy-gradient methods on Snake
PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
popgym - Partially Observable Process Gym
episodic-transformer-memory-ppo - Clean baseline implementation of PPO using an episodic TransformerXL memory
gym-continuousDoubleAuction - A custom MARL (multi-agent reinforcement learning) environment where multiple agents trade against one another (self-play) in a zero-sum continuous double auction. Ray [RLlib] is used for training.