| | orbit | skrl |
|---|---|---|
| Mentions | 3 | 7 |
| Stars | 822 | 410 |
| Growth | 19.5% | - |
| Activity | 9.5 | 8.7 |
| Latest commit | 6 days ago | 2 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
orbit
- NVIDIA Orbit - Unified framework for robot learning
- Nvidia-Omniverse/Orbit: Unified framework for robot learning
-
What is the limit on parallel environments?
Although Gym/Gymnasium allows you to generate vectorized parallel environments, if you want to train in hundreds or thousands of environments you will need to use the NVIDIA simulator repertoire (Isaac Gym, Isaac Orbit or Omniverse Isaac Gym).
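For reference, a minimal sketch of Gymnasium's built-in vectorized-environment API; the environment id, number of copies, and step count are arbitrary choices for illustration:

```python
import gymnasium as gym

# Run 8 copies of Pendulum-v1 in lockstep within a single process.
# AsyncVectorEnv would distribute them across subprocesses instead, but either
# way this stays far below the thousands of parallel environments that
# GPU-based simulators such as Isaac Gym / Orbit are designed for.
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("Pendulum-v1") for _ in range(8)]
)

observations, infos = envs.reset(seed=42)
for _ in range(100):
    # Sample one action per parallel environment
    actions = envs.action_space.sample()
    observations, rewards, terminations, truncations, infos = envs.step(actions)
envs.close()
```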
skrl
-
Isaac Gym with Off-policy Algorithms
skrl will allow you to easily configure and use off-policy algorithms such as DDPG, TD3 and SAC in Isaac Gym, Omniverse Isaac Gym and Isaac Orbit, but I think there will not be significant gains compared to on-policy algorithms.
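For context, a condensed sketch of what configuring SAC with skrl's torch API roughly looks like, following the pattern of skrl's documented examples; module paths can differ between skrl versions, and the environment choice and hyperparameter values here are assumptions, not a tuned setup:

```python
import gymnasium as gym
import torch
import torch.nn as nn

# Module paths follow recent skrl releases; older versions expose
# wrap_env under skrl.envs.torch instead of skrl.envs.wrappers.torch.
from skrl.agents.torch.sac import SAC, SAC_DEFAULT_CONFIG
from skrl.envs.wrappers.torch import wrap_env
from skrl.memories.torch import RandomMemory
from skrl.models.torch import DeterministicMixin, GaussianMixin, Model
from skrl.trainers.torch import SequentialTrainer


class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions=True)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.ReLU(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        # Gaussian models return (mean actions, log std, extra outputs)
        return self.net(inputs["states"]), self.log_std_parameter, {}


class Critic(DeterministicMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations + self.num_actions, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def compute(self, inputs, role):
        # Q(s, a): concatenate states and taken actions
        return self.net(torch.cat([inputs["states"], inputs["taken_actions"]], dim=-1)), {}


env = wrap_env(gym.make("Pendulum-v1"))  # the same wrapper also covers Isaac Gym / Orbit envs
device = env.device

# Off-policy agents need a replay buffer
memory = RandomMemory(memory_size=15000, num_envs=env.num_envs, device=device)

models = {"policy": Policy(env.observation_space, env.action_space, device),
          "critic_1": Critic(env.observation_space, env.action_space, device),
          "critic_2": Critic(env.observation_space, env.action_space, device),
          "target_critic_1": Critic(env.observation_space, env.action_space, device),
          "target_critic_2": Critic(env.observation_space, env.action_space, device)}

cfg = SAC_DEFAULT_CONFIG.copy()
cfg["batch_size"] = 64        # illustrative values, not tuned
cfg["learning_starts"] = 1000

agent = SAC(models=models, memory=memory, cfg=cfg,
            observation_space=env.observation_space,
            action_space=env.action_space, device=device)

trainer = SequentialTrainer(cfg={"timesteps": 20000}, env=env, agents=agent)
trainer.train()
```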
-
Choosing a framework in 2023
Check its comprehensive documentation at https://skrl.readthedocs.io
-
Best recurrent RL library?
Also, skrl. It supports RNN, LSTM, GRU, and other variants for A2C, DDPG, PPO, SAC, TD3, and TRPO agents. See the models' basic usage and examples
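As a framework-agnostic illustration of the idea (a tiny PyTorch sketch, not skrl's actual model interface), this is what a GRU policy that carries its hidden state across timesteps looks like; all layer sizes are placeholders:

```python
import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    """Tiny GRU policy: encodes each observation and keeps a hidden state
    across timesteps so the agent can act under partial observability."""

    def __init__(self, obs_dim: int, action_dim: int, hidden_size: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_size), nn.Tanh())
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.mean_head = nn.Linear(hidden_size, action_dim)

    def forward(self, obs, hidden=None):
        # obs: (batch, seq_len, obs_dim); hidden: (1, batch, hidden_size) or None
        features = self.encoder(obs)
        out, hidden = self.gru(features, hidden)
        return self.mean_head(out), hidden


# One rollout step at a time: the hidden state is threaded through the loop
policy = RecurrentPolicy(obs_dim=3, action_dim=1)
hidden = None
obs = torch.zeros(1, 1, 3)  # (batch=1, seq_len=1, obs_dim=3)
for _ in range(5):
    action_mean, hidden = policy(obs, hidden)
```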
-
What is the limit on parallel environments?
In this case, I encourage you to try the skrl RL library that fully supports all of them, among others.
-
What's the best "Non-Black Box" framework for SOTA algorithms?
I encourage you to try skrl (https://skrl.readthedocs.io).
-
I have a PPO implementation but I am pretty sure it's wrong. I need it to be correct because I would like to add an LSTM layer on top of it. Could someone have a look?
I encourage you to take a look at the skrl library...
-
Can we use RNN in RL?
This is the list of examples (to be included in the documentation) that include RNNs: (ddpg_gym_pendulumnovel_gru.py, ddpg_gym_pendulumnovel_lstm.py, ddpg_gym_pendulumnovel_rnn.py, etc.)... and here are some RNN benchmarking results (to be updated for the release)
What are some alternatives?
OmniIsaacGymEnvs - Reinforcement Learning Environments for Omniverse Isaac Gym
IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
awesome-isaac-gym - A curated list of awesome NVIDIA Isaac Gym frameworks, papers, software, and resources
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
pfrl - PFRL: a PyTorch-based deep reinforcement learning library
PythonRobotics - Python sample codes for robotics algorithms.
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
autonomous-learning-library - A PyTorch library for building deep reinforcement learning agents.
pytorch-blender - Seamless, distributed, real-time integration of Blender into PyTorch data pipelines