f-IRL
Inverse Reinforcement Learning via State Marginal Matching, CoRL 2020 (by twni2016)
pytorch-a2c-ppo-acktr-gail
PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL). (by ikostrikov)
| | f-IRL | pytorch-a2c-ppo-acktr-gail |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 35 | 3,423 |
| Growth | - | - |
| Activity | 1.8 | 0.0 |
| Last commit | 10 months ago | almost 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
f-IRL
Posts with mentions or reviews of f-IRL.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-09-12.
-
Can you use the reward learned in generative adversarial imitation learning in order to train from scratch?
Code for https://arxiv.org/abs/2011.04709 can be found at https://github.com/twni2016/f-IRL
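A minimal sketch of the idea behind this answer: once a stationary reward function has been recovered (e.g. by an IRL method such as f-IRL), it can simply replace the environment reward during a fresh policy-training run. All names here (`env_step`, `policy`, `learned_reward`) are hypothetical placeholders, not functions from either repo:

```python
def rollout_with_learned_reward(env_step, policy, learned_reward, state, horizon=100):
    """Collect a trajectory, substituting a frozen learned reward r(s, a)
    for the true environment reward.

    env_step, policy, and learned_reward are placeholders for your own
    environment transition function, policy, and learned reward model.
    """
    trajectory = []
    for _ in range(horizon):
        action = policy(state)
        next_state, _, done = env_step(state, action)  # true reward discarded
        reward = learned_reward(state, action)         # learned reward used instead
        trajectory.append((state, action, reward))
        if done:
            break
        state = next_state
    return trajectory
```

Any standard RL algorithm (PPO, SAC, etc.) can then be trained from scratch on these relabeled trajectories.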
-
How to pretrain a model on expert data?
Try using an imitation learning algorithm. Two popular options are MaxEnt IRL and GAIL. This repository has a GAIL implementation, and this repository has MaxEnt IRL and GAIL implementations. There are other implementations you can check out too.
pytorch-a2c-ppo-acktr-gail
Posts with mentions or reviews of pytorch-a2c-ppo-acktr-gail.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-09-12.
-
How is advantage estimation done when episodes are of variable length in PPO?
As an example, look at the "compute_returns" function here (and pay attention to how self.masks is used): https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/storage.py
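The masks trick can be sketched in plain Python. The idea is that masks[t] is 0.0 at an episode boundary and 1.0 otherwise, so both the bootstrapped value term and the accumulated GAE are cut off where an episode ends, which is what makes variable-length episodes work inside one rollout buffer. This is a simplified, un-batched sketch, not the repo's exact code (the repo's indexing of masks differs slightly):

```python
def compute_gae_returns(rewards, values, next_value, masks, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over a rollout that may span
    several episodes of different lengths.

    masks[t] = 0.0 cuts bootstrapping and advantage accumulation at an
    episode boundary; masks[t] = 1.0 elsewhere.
    """
    vals = list(values) + [next_value]  # bootstrap value for the final step
    gae = 0.0
    returns = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        # TD error; the mask zeroes the bootstrapped term at episode ends
        delta = rewards[t] + gamma * vals[t + 1] * masks[t] - vals[t]
        # the mask also stops the advantage from leaking across episodes
        gae = delta + gamma * lam * masks[t] * gae
        returns[t] = gae + vals[t]
    return returns
```

With masks all equal to 1.0 this reduces to ordinary GAE over a single episode; a 0.0 at position t makes returns[t] depend only on rewards[t].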
-
How to pretrain a model on expert data?
Try using an imitation learning algorithm. Two popular options are MaxEnt IRL and GAIL. This repository has a GAIL implementation, and this repository has MaxEnt IRL and GAIL implementations. There are other implementations you can check out too.
-
Trying to Train PPO Agent for Pendulum-v0 from Pixel Inputs
For PPO, I used this repo, which includes most of the common tricks (GAE, reward normalization, etc.). I have verified that this repo works on the traditional Pendulum-v0 task and on Atari games (Pong and Breakout).