voxelgym2D vs skrl

| | voxelgym2D | skrl |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 11 | 417 |
| Growth | - | - |
| Activity | 7.4 | 8.7 |
| Last commit | about 1 month ago | 17 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
voxelgym2D
-
RL-based path planning
Hey guys. I just started learning more about training DRL agents to solve path planning problems. I have set up a small 2D gym environment to test out ideas. Here is the URL to the GitHub page: https://github.com/harisankar95/voxelgym2D
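The post above describes a custom 2D gym environment for path planning. Any such environment follows the standard Gym `reset()`/`step()` protocol; here is a minimal sketch of that loop using a dummy grid-world stand-in (this is NOT the actual voxelgym2D API, whose environment ids and observation format are not shown here):

```python
import random

# Dummy Gym-style 2D grid environment (a stand-in, not voxelgym2D itself):
# the agent starts at the top-left corner and must reach the bottom-right.
class DummyGridEnv:
    def __init__(self, size=8):
        self.size = size
        self.pos = [0, 0]
        self.goal = [size - 1, size - 1]

    def reset(self):
        self.pos = [0, 0]
        return tuple(self.pos)  # initial observation

    def step(self, action):
        # actions: 0=up, 1=down, 2=left, 3=right
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        # clamp movement to the grid bounds
        self.pos[0] = min(max(self.pos[0] + dx, 0), self.size - 1)
        self.pos[1] = min(max(self.pos[1] + dy, 0), self.size - 1)
        done = self.pos == self.goal
        reward = 1.0 if done else -0.01  # small step penalty encourages short paths
        return tuple(self.pos), reward, done, {}

# The standard interaction loop an RL agent (here: a random policy) runs:
env = DummyGridEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = random.randrange(4)
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

A trained DRL agent would replace `random.randrange(4)` with its policy's action choice; the reward shaping (goal bonus plus per-step penalty) is a common pattern for path-planning tasks.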
skrl
-
Isaac Gym with Off-policy Algorithms
skrl will allow you to easily configure and use off-policy algorithms such as DDPG, TD3 and SAC in Isaac Gym, Omniverse Isaac Gym and Isaac Orbit, but I think there will not be significant gains compared to on-policy algorithms.
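The distinguishing feature of the off-policy algorithms named above (DDPG, TD3, SAC) is that they learn from a replay buffer of stored transitions rather than only the most recent rollout. A minimal sketch of such a buffer in plain Python (this illustrates the concept only; it is not skrl's actual memory API):

```python
import random
from collections import deque

# Minimal replay buffer: off-policy agents (DDPG/TD3/SAC) sample old
# transitions for their updates, so each experience is reused many times.
class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted when full

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling breaks temporal correlation in the batch
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(250):                 # add more transitions than the capacity...
    buf.add(t, 0, 0.0, t + 1, False)
batch = buf.sample(32)               # ...then draw a training minibatch
```

This data reuse is why off-policy methods are usually more sample-efficient; the commenter's point is that in massively parallel simulators like Isaac Gym, samples are so cheap that the advantage over on-policy algorithms often shrinks.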
-
Choosing a framework in 2023
Check its comprehensive documentation at https://skrl.readthedocs.io
-
Best recurrent RL library?
Also, skrl. It supports RNN, LSTM, GRU, and other variants for A2C, DDPG, PPO, SAC, TD3, and TRPO agents. See the models' basic usage and examples
-
What is the limit on parallel environments?
In this case, I encourage you to try the skrl RL library that fully supports all of them, among others.
-
What's the best "Non-Black Box" framework for SOTA algorithms?
I encourage you to try skrl (https://skrl.readthedocs.io).
-
I have a PPO implementation but I am pretty sure it is wrong. I need this to be correct because I would like to add an LSTM layer on top of it. Could someone have a look?
I encourage you to take a look at the skrl library...
-
Can we use RNN in RL?
This is the list of examples (to be included in the documentation) that include RNNs: (ddpg_gym_pendulumnovel_gru.py, ddpg_gym_pendulumnovel_lstm.py, ddpg_gym_pendulumnovel_rnn.py, etc.)... and here are some RNN benchmarking results (to be updated for the release)
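What makes recurrent RL (as in the RNN/LSTM/GRU examples above) different from feed-forward RL is that a hidden state is carried across environment steps within an episode and reset at episode boundaries. A toy illustration with a single-unit tanh recurrence in plain Python (not skrl's model classes, whose API is documented separately):

```python
import math

# Toy single-unit recurrent cell: h' = tanh(w_h * h + w_x * x).
# Weights here are arbitrary constants for illustration; in an RL agent
# they would be learned parameters of the policy/value network.
def rnn_step(h, x, w_h=0.5, w_x=1.0):
    return math.tanh(w_h * h + w_x * x)

def run_episode(observations):
    h = 0.0                     # hidden state is reset at episode start
    hidden_states = []
    for x in observations:
        h = rnn_step(h, x)      # carried forward across steps, not recomputed
        hidden_states.append(h)
    return hidden_states

# The hidden state keeps a decaying memory of the first observation
# even when later observations are zero, which is what lets a
# recurrent policy act on history in partially observable tasks.
states = run_episode([1.0, 0.0, 0.0])
```

This is also why POMDP benchmarks (such as the "NoVel" pendulum variants in the example filenames, where velocity is hidden from the observation) are the standard testbed for recurrent agents: the policy must infer the missing state from past observations.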
What are some alternatives?
AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.
IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
awesome-isaac-gym - A curated list of awesome NVIDIA Isaac Gym frameworks, papers, software, and resources
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
pfrl - PFRL: a PyTorch-based deep reinforcement learning library
stable-baselines - Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
OmniIsaacGymEnvs - Reinforcement Learning Environments for Omniverse Isaac Gym
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
autonomous-learning-library - A PyTorch library for building deep reinforcement learning agents.