| | pomdp-baselines | skrl |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 275 | 404 |
| Growth | - | - |
| Activity | 4.3 | 9.3 |
| Last commit | 8 months ago | 17 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
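The exact formula behind the activity score is not published, but the description above (recent commits weigh more than older ones) can be illustrated with a simple exponential-decay score. The function name, half-life parameter, and scaling below are purely illustrative assumptions, not the tracker's actual computation:

```python
import math
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted activity: each commit contributes
    2 ** (-age_in_days / half_life_days), so recent commits count
    more than old ones. Illustrative formula only, not the real one."""
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += math.exp(-age_days * math.log(2) / half_life_days)
    return round(score, 1)

now = datetime(2023, 6, 1)
recent = [now - timedelta(days=k) for k in (1, 3, 7, 14)]
stale = [now - timedelta(days=k) for k in (200, 240, 300, 360)]

# Same number of commits, but the recently active project scores higher.
print(activity_score(recent, now), activity_score(stale, now))
```

This matches the qualitative behavior in the table: skrl's last commit was 17 days ago and it scores 9.3, while pomdp-baselines' last commit was 8 months ago and it scores 4.3.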
pomdp-baselines
- Best recurrent RL library?
-
In Latest Machine Learning Research, A Group at CMU Release a Simple and Efficient Implementation of Recurrent Model-Free Reinforcement Learning (RL) for Future Work to Use as a Baseline for POMDP Algorithms
Check out the paper, the GitHub link, the project, and the reference article.
-
[R] Recurrent Model-Free RL is a Strong Baseline for Many POMDPs
Code for https://arxiv.org/abs/2110.05038 found: https://github.com/twni2016/pomdp-baselines
skrl
-
Isaac Gym with Off-policy Algorithms
skrl will allow you to easily configure and use off-policy algorithms such as DDPG, TD3 and SAC in Isaac Gym, Omniverse Isaac Gym and Isaac Orbit, but I think there will not be significant gains compared to on-policy algorithms.
-
Choosing a framework in 2023
Check its comprehensive documentation at https://skrl.readthedocs.io
-
Best recurrent RL library?
Also, skrl. It supports RNN, LSTM, GRU, and other variants for A2C, DDPG, PPO, SAC, TD3, and TRPO agents. See the models basic usage and examples
-
What is the limit on parallel environments?
In this case, I encourage you to try the skrl RL library that fully supports all of them, among others.
-
What's the best "Non-Black Box" framework for SOTA algorithms?
I encourage you to try skrl (https://skrl.readthedocs.io).
-
I have a PPO implementation but I am pretty sure it is wrong. I need it to be correct because I would like to add an LSTM layer on top of it. Could someone have a look?
I encourage you to take a look at the skrl library...
-
Can we use RNN in RL?
This is the list of examples (to be included in the documentation) that includes RNN: (ddpg_gym_pendulumnovel_gru.py, ddpg_gym_pendulumnovel_lstm.py, ddpg_gym_pendulumnovel_rnn.py, etc.)... and here are some RNN benchmarking results (to be updated for the release)
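The recurrent agents discussed above (RNN/LSTM/GRU policies in both libraries) are typically trained with truncated backpropagation through time: a long episode is split into fixed-length windows, and gradients flow only within each window while the hidden state is re-initialized or detached at window boundaries. A minimal sketch of that chunking step, with illustrative names (not the API of either library):

```python
def tbptt_chunks(episode, window):
    """Split one episode (a list of transitions) into fixed-length
    windows for truncated BPTT. Gradients are backpropagated only
    within a window; the RNN hidden state is detached (or reset) at
    each boundary. The final window may be shorter than `window`."""
    return [episode[i:i + window] for i in range(0, len(episode), window)]

episode = list(range(10))          # 10 timesteps, stand-ins for transitions
chunks = tbptt_chunks(episode, 4)  # windows of length 4
print(chunks)                      # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The window length trades off memory horizon against gradient stability, which is the main tuning knob in the recurrent-PPO and recurrent off-policy baselines compared on this page.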
What are some alternatives?
tianshou - An elegant PyTorch deep reinforcement learning library.
IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
ElegantRL - Massively Parallel Deep Reinforcement Learning. 🔥
awesome-isaac-gym - A curated list of awesome NVIDIA Isaac Gym frameworks, papers, software, and resources
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
pfrl - PFRL: a PyTorch-based deep reinforcement learning library
DeepRL-TensorFlow2 - 🐋 Simple implementations of various popular Deep Reinforcement Learning algorithms using TensorFlow2
OmniIsaacGymEnvs - Reinforcement Learning Environments for Omniverse Isaac Gym
minimalRL - Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
recurrent-ppo-truncated-bptt - Baseline implementation of recurrent PPO using truncated BPTT
autonomous-learning-library - A PyTorch library for building deep reinforcement learning agents.