DI-engine vs popgym

| | DI-engine | popgym |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 2,553 | 145 |
| Growth | 5.7% | 6.3% |
| Activity | 8.7 | 6.1 |
| Latest commit | 10 days ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DI-engine
-
Anyone have experience with DI-Engine?
I posted a while back asking people what frameworks they were using for RL research. Recently I stumbled upon DI-Engine, which looks promising: actively maintained, with a diverse set of algorithms already implemented.
-
TransformerXL + PPO Baseline + MemoryGym
DI Engine
-
Struggling with algorithm generality? Try DI-engine; here is the solution
popgym
-
What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
POPGym is based on RLlib and has two linear transformers and five or six RNN variants, including LSTM. I've found that transformers tend to perform pretty poorly in RL when compared to RNNs.
-
POPGym: Partially Observable Reinforcement Learning
Code: https://github.com/proroklab/popgym
-
TransformerXL + PPO Baseline + MemoryGym
Have you seen this other ICLR paper, POPGym? Paper: https://openreview.net/forum?id=chDrutUTs0K Code: https://github.com/smorad/popgym
-
Partially observable Continuous Control Gym Environment
https://github.com/smorad/popgym contains 15 partially observable gym environments, but they use discrete action spaces. I've verified that memoryless models (e.g. PPO+MLP) cannot solve these tasks, except for the navigation ones.
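The claim above, that memoryless models cannot solve partially observable tasks, can be illustrated with a toy "repeat previous"-style task in the spirit of POPGym's memory environments. This is a hypothetical minimal sketch in plain Python, not POPGym's actual API; `run`, `memoryless`, and `recurrent` are made-up names for illustration:

```python
import random

def run(policy, steps=10_000, seed=0):
    """Score a policy on a toy 'repeat previous' task: at each step the
    agent sees a random bit and must output the bit from the step before."""
    rng = random.Random(seed)
    prev = rng.randint(0, 1)  # first observation (nothing to repeat yet)
    state = None
    correct = 0
    for _ in range(steps):
        obs = rng.randint(0, 1)
        action, state = policy(obs, state)
        correct += int(action == prev)
        prev = obs
    return correct / steps

def memoryless(obs, state):
    # A stateless policy only sees the current bit, which is independent
    # of the previous one, so it cannot beat chance on this task.
    return obs, None

def recurrent(obs, state):
    # Carrying the last observation in the recurrent state solves the task
    # exactly (modulo the very first step, where there is nothing stored).
    guess = state if state is not None else 0
    return guess, obs
```

Running both policies makes the gap concrete: `run(recurrent)` scores near-perfect accuracy, while `run(memoryless)` hovers around 0.5, which is why the navigation tasks (where the current observation carries enough signal) are the exception noted above.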
What are some alternatives?
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
recurrent-ppo-truncated-bptt - Baseline implementation of recurrent PPO using truncated BPTT
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
brain-agent - Brain Agent for Large-Scale and Multi-Task Agent Learning
tianshou - An elegant PyTorch deep reinforcement learning library.
episodic-transformer-memory-ppo - Clean baseline implementation of PPO using an episodic TransformerXL memory
seed_rl - SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture.
ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
adaptive-transformers-in-rl - Adaptive Attention Span for Reinforcement Learning
on-policy - This is the official implementation of Multi-Agent PPO (MAPPO).
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.