pomdp-baselines vs Fleet-AI

| | pomdp-baselines | Fleet-AI |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 275 | 3 |
| Growth | - | - |
| Activity | 4.3 | 0.0 |
| Last commit | 7 months ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
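The exact formula behind the activity number is not given here; the description above only says that recent commits are weighted more heavily than older ones. A minimal sketch of such a recency-weighted commit score, assuming a hypothetical exponential-decay weighting with a configurable half-life (the function name, half-life value, and weighting scheme are illustrative, not the site's actual method):

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, now=None, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so a commit made today
    counts 1.0 and one made a half-life ago counts 0.5.
    (Hypothetical scoring scheme, not LibHunt's actual formula.)"""
    now = now or datetime.now(timezone.utc)
    return sum(
        2 ** (-(now - d).total_seconds() / 86400 / half_life_days)
        for d in commit_dates
    )

# Example: three commits made 0, 30, and 300 days ago.
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
commits = [now - timedelta(days=d) for d in (0, 30, 300)]
score = activity_score(commits, now=now)
# The fresh commit contributes 1.0, the 30-day-old one 0.5,
# and the 300-day-old one is negligible, so score is about 1.5.
```

Ranking all tracked projects by such a score and reporting the percentile would yield the relative number described above (e.g. 9.0 for the top 10%).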
pomdp-baselines
- Best recurrent RL library?
- In Latest Machine Learning Research, a Group at CMU Releases a Simple and Efficient Implementation of Recurrent Model-Free Reinforcement Learning (RL) for Future Work to Use as a Baseline for POMDP Algorithms
- [R] Recurrent Model-Free RL is a Strong Baseline for Many POMDPs
Code for https://arxiv.org/abs/2110.05038 found: https://github.com/twni2016/pomdp-baselines
Fleet-AI
- Playing Battleship with RL (GitHub in comments)
What are some alternatives?
tianshou - An elegant PyTorch deep reinforcement learning library.
DeepRL-TensorFlow2 - 🐋 Simple implementations of various popular Deep Reinforcement Learning algorithms using TensorFlow2
ElegantRL - Massively Parallel Deep Reinforcement Learning. 🔥
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
minimalRL - Implementations of basic RL algorithms with minimal lines of code! (PyTorch based)
autonomous-learning-library - A PyTorch library for building deep reinforcement learning agents.
recurrent-ppo-truncated-bptt - Baseline implementation of recurrent PPO using truncated BPTT