| | pytorch-a2c-ppo-acktr-gail | metaworld |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 3,423 | 829 |
| Growth | - | - |
| Activity | 0.0 | 3.5 |
| Latest commit | almost 2 years ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch-a2c-ppo-acktr-gail
-
How is advantage estimation done in PPO when episodes are of variable length?
As an example, look at the "compute_returns" function here (and pay attention to how self.masks is used): https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/storage.py
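The core idea behind the masks is that a mask of 0.0 at an episode boundary cuts off bootstrapping, so value estimates never leak across resets even when a rollout buffer contains several episodes of different lengths. A simplified sketch of that pattern (this is an illustration of the masking trick, not the repo's exact function, which operates on tensors and uses a slightly different indexing convention):

```python
def compute_gae(rewards, values, masks, gamma=0.99, gae_lambda=0.95):
    """Generalized Advantage Estimation over a rollout that may span
    several episodes. masks[t] is 0.0 if step t ended an episode and
    1.0 otherwise; values must contain len(rewards) + 1 entries, the
    last one being the bootstrap value for the final state."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # TD error; masks[t] zeroes the bootstrap term at episode ends
        delta = rewards[t] + gamma * values[t + 1] * masks[t] - values[t]
        # the running GAE accumulator is also cut at episode boundaries
        gae = delta + gamma * gae_lambda * masks[t] * gae
        advantages[t] = gae
    returns = [adv + v for adv, v in zip(advantages, values[:-1])]
    return advantages, returns
```

Because both the TD error and the recursive accumulator are multiplied by the mask, a rollout of two short episodes is treated exactly as two independent trajectories.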
-
How to pretrain a model on expert data?
Try using an imitation learning algorithm. Two popular options are MaxEnt IRL and GAIL. This repository has a GAIL implementation, and this repository has implementations of both MaxEnt IRL and GAIL. There are other implementations you can check out as well.
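The simplest way to pretrain on expert data, often used as a warm start before GAIL or RL fine-tuning, is plain behavioral cloning: fit the policy to expert (state, action) pairs with supervised regression. A minimal sketch with a linear policy and batch gradient descent (purely illustrative; a real setup would use the same neural-network policy as the downstream RL algorithm):

```python
import numpy as np

def pretrain_behavioral_cloning(expert_states, expert_actions,
                                lr=0.1, epochs=500):
    """Fit a linear policy a = s @ W to expert demonstrations by
    minimizing mean squared error -- behavioral cloning in its
    simplest form."""
    n, s_dim = expert_states.shape
    a_dim = expert_actions.shape[1]
    W = np.zeros((s_dim, a_dim))
    for _ in range(epochs):
        pred = expert_states @ W
        # gradient of 0.5 * mean squared error w.r.t. W
        grad = expert_states.T @ (pred - expert_actions) / n
        W -= lr * grad
    return W
```

After pretraining, the cloned weights initialize the policy that GAIL (or PPO) then improves; cloning alone tends to drift off the expert's state distribution, which is exactly the problem GAIL addresses.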
-
Trying to Train PPO Agent for Pendulum-v0 from Pixel Inputs
For the PPO, I used this repo, which includes most of the standard tricks (GAE, normalized rewards, etc.). I have verified that this repo works on the traditional Pendulum-v0 task and on Atari games (Pong and Breakout).
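One detail that matters for pixel-input Pendulum specifically: a single rendered frame contains no angular-velocity information, so observations are usually a stack of the last k frames. A minimal frame-stacking sketch (a hand-rolled illustration independent of any particular gym version; gym also ships wrappers for this):

```python
import numpy as np
from collections import deque

class FrameStacker:
    """Keeps the last `k` frames and returns them stacked along a new
    leading axis, so a policy can infer velocities from pixels."""
    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, first_frame):
        # At episode start, repeat the first frame k times.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(first_frame)
        return np.stack(self.frames)

    def step(self, frame):
        self.frames.append(frame)
        return np.stack(self.frames)
```

The stacked array is what gets fed to the CNN encoder; without it, the task is partially observable and PPO from single frames tends not to converge.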
metaworld
-
Are there any follow-up studies of RL^2 algorithms?
Hi r/reinforcementlearning! I recently became interested in meta-reinforcement learning, and I am particularly interested in models that use recurrent neural networks, such as RL2. After some searching, I found that most recent approaches to meta-reinforcement learning are based on MARL methods, even though RL2 performed very well in the meta-RL benchmark paper, Meta-World. At the same time, it was hard to find follow-up research on RL2. Does anyone know of follow-up research on RL2?
-
[D] Creating benchmarks for reinforcement learning
How long does it take to write a benchmark for RL like meta-world (https://github.com/rlworkgroup/metaworld) or multiagent emergence environments (https://github.com/openai/multi-agent-emergence-environments)?
What are some alternatives?
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
garage - A toolkit for reproducible reinforcement learning research.
soft-actor-critic - Implementation of the Soft Actor Critic algorithm using Pytorch.
tianshou - An elegant PyTorch deep reinforcement learning library.
dm_control - Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
multi-agent-emergence-environments - Environment generation code for the paper "Emergent Tool Use From Multi-Agent Autocurricula"
tensorforce - Tensorforce: a TensorFlow library for applied reinforcement learning
TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning.
PCGrad - Code for "Gradient Surgery for Multi-Task Learning"
DI-engine - OpenDILab Decision AI Engine
Reinforcement-Learning - Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022