| | pfrl | skrl |
|---|---|---|
| Mentions | 3 | 7 |
| Stars | 1,149 | 413 |
| Growth | 1.4% | - |
| Activity | 4.6 | 8.7 |
| Latest commit | 14 days ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
pfrl

- Choosing a framework in 2023
  PFRL https://github.com/pfnet/pfrl
- [P] Can I separate out the steps of learn() in stable baselines3?
- Applied resources in PyTorch?
  Take a look at https://github.com/pfnet/pfrl/blob/master/examples/quickstart/quickstart.ipynb
skrl

- Isaac Gym with Off-policy Algorithms
  skrl will allow you to easily configure and use off-policy algorithms such as DDPG, TD3, and SAC in Isaac Gym, Omniverse Isaac Gym, and Isaac Orbit, but I think there will not be significant gains compared to on-policy algorithms.
- Choosing a framework in 2023
  Check its comprehensive documentation at https://skrl.readthedocs.io
- Best recurrent RL library?
  Also, skrl. It supports RNN, LSTM, GRU, and other variants for A2C, DDPG, PPO, SAC, TD3, and TRPO agents. See the models' basic usage and examples.
- What is the limit on parallel environments?
  In this case, I encourage you to try the skrl RL library, which fully supports all of them, among others.
- What's the best "Non-Black Box" framework for SOTA algorithms?
  I encourage you to try skrl (https://skrl.readthedocs.io).
- I have a PPO implementation but I am pretty sure it is wrong. I need it to be correct because I would like to add an LSTM layer on top of it. Could someone have a look?
  I encourage you to take a look at the skrl library...
- Can we use RNN in RL?
  This is the list of examples (to be included in the documentation) that include RNN: ddpg_gym_pendulumnovel_gru.py, ddpg_gym_pendulumnovel_lstm.py, ddpg_gym_pendulumnovel_rnn.py, etc. ... and here are some RNN benchmarking results (to be updated for the release).
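The thread above asks whether RNNs can be used in RL. As a library-agnostic illustration (this is plain PyTorch, not the skrl or pfrl API), a recurrent policy is just a network that carries a hidden state across timesteps; the sketch below shows a minimal GRU policy head that maps an observation sequence to per-step action logits:

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Minimal GRU-based policy: maps an observation sequence to action logits.

    Illustrative sketch only; real RL libraries additionally handle episode
    boundaries (resetting the hidden state) and sequence batching in the
    replay/rollout buffer.
    """
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim); h0: (num_layers, batch, hidden) or None
        out, hn = self.gru(obs_seq, h0)   # out: (batch, time, hidden)
        return self.head(out), hn         # per-step logits, final hidden state

policy = RecurrentPolicy(obs_dim=3, act_dim=2)
logits, h = policy(torch.zeros(1, 5, 3))
# logits.shape -> torch.Size([1, 5, 2]); h.shape -> torch.Size([1, 1, 64])
```

During rollout, the final hidden state `h` is fed back in as `h0` for the next step and reset to `None` at episode boundaries; this is the bookkeeping that libraries like skrl automate for their RNN/LSTM/GRU agent variants.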
What are some alternatives?
- acme - A library of reinforcement learning components and agents
- IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
- RL-Adventure - PyTorch implementation of DQN / DDQN / prioritized replay / noisy networks / distributional values / Rainbow / hierarchical RL
- awesome-isaac-gym - A curated list of awesome NVIDIA Isaac Gym frameworks, papers, software, and resources
- OmniIsaacGymEnvs - Reinforcement Learning Environments for Omniverse Isaac Gym
- pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
- cleanrl - High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
- autonomous-learning-library - A PyTorch library for building deep reinforcement learning agents
- orbit - Unified framework for robot learning built on NVIDIA Isaac Sim
- pytorch-blender - Seamless, distributed, real-time integration of Blender into PyTorch data pipelines
- dmc2gymnasium - Gymnasium integration for the DeepMind Control (DMC) suite
- actorch - Deep reinforcement learning framework for fast prototyping based on PyTorch