dreamerv2
pytorch-a2c-ppo-acktr-gail
| | dreamerv2 | pytorch-a2c-ppo-acktr-gail |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 853 | 3,423 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last Commit | about 1 year ago | almost 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dreamerv2
-
Sources of Actor Gradients
In fact, they found that plain REINFORCE gradients now work in DM Control too: Dreamerv2 GitHub (they just needed to turn off gradients through the action path, which I guess was being passed back with straight-through estimation? I'm actually having a difficult time telling how the gradient differs between the action and policy.log_prob(action)).
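To make the distinction in that comment concrete, here is a minimal, framework-free sketch of the REINFORCE (score-function) gradient, which estimates the policy gradient from log-probabilities alone rather than backpropagating through the sampled action. The toy Bernoulli policy and function names are my own illustration, not code from either repo:

```python
import random

def reinforce_grad_estimate(p, f, n=100_000, seed=0):
    """Score-function (REINFORCE) estimate of d/dp E_{a~Bernoulli(p)}[f(a)].

    Uses grad = E[f(a) * d log pi(a|p) / dp]; no gradient flows
    through the sampled action itself, only through log-probs.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p else 0
        # d/dp log pi(a|p): 1/p if a == 1, else -1/(1-p)
        score = 1.0 / p if a == 1 else -1.0 / (1.0 - p)
        total += f(a) * score
    return total / n

# With f(a) = a, E[f] = p, so the true gradient is exactly 1.0
est = reinforce_grad_estimate(0.3, lambda a: a)
```

A straight-through estimator would instead differentiate f through the action as if sampling were the identity; the point above is that Dreamer can drop that path and rely on the score-function term alone.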
-
PyDreamer: model-based RL written in PyTorch + integrations with DM Lab and MineRL environments
This is my implementation of Hafner et al. DreamerV2 algorithm. I found the PlaNet/Dreamer/DreamerV2 paper series to be some of the coolest RL research in recent years, showing convincingly that MBRL (model-based RL) does work and is competitive with model-free algorithms. And we all know that AGI will be model-based, right? :)
-
Any current state or the art libraries for training agents to play atari games?
Last I checked, for running off a single node, the state of the art was Dreamerv2 https://github.com/danijar/dreamerv2
- Google AI, DeepMind And The University of Toronto Introduce DreamerV2, The First Reinforcement Learning (RL) Agent That Outperforms Humans on The Atari Benchmark
pytorch-a2c-ppo-acktr-gail
-
How is advantage estimation done when episodes are of variable length in PPO?
As an example look at "compute_returns" function here (and pay attention to how self.masks is used): https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/storage.py
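A minimal sketch of what that masked return computation does: the mask zeroes out bootstrapping across episode boundaries, which is how variable-length episodes are handled in a fixed-size rollout buffer. The list-based signature below is my simplification; the repo's `compute_returns` operates on tensors and also supports GAE:

```python
def compute_returns(rewards, masks, next_value, gamma=0.99):
    """Discounted returns over a rollout that may span several episodes.

    masks[t] == 0.0 marks a terminal step at time t, which stops the
    return from bootstrapping past the episode boundary.
    """
    returns = [0.0] * len(rewards)
    running = next_value  # bootstrap from the value of the state after the rollout
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * masks[t] * running
        returns[t] = running
    return returns
```

For example, with `masks = [1.0, 1.0, 0.0]` the bootstrap value `next_value` never leaks into the last episode's returns, because the terminal mask multiplies it away.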
-
How to pretrain a model on expert data?
Try using an imitation learning algorithm. Two popular options are MaxEnt IRL and GAIL. This repository has a GAIL implementation, and this repository has MaxEnt IRL and GAIL implementations. There are other implementations you can check out as well.
-
Trying to Train PPO Agent for Pendulum-v0 from Pixel Inputs
For PPO, I used this repo, which includes most of the standard tricks, including GAE and normalized rewards. I have verified that this repo works for the traditional Pendulum-v0 task and for Atari games (Pong and Breakout).
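Since GAE is one of the tricks mentioned above, here is a minimal sketch of Generalized Advantage Estimation with the same episode-boundary masks. The function name and list-based interface are my own illustration, not the repo's API:

```python
def gae_advantages(rewards, values, masks, next_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (Schulman et al., 2016).

    delta_t = r_t + gamma * V(s_{t+1}) * mask_t - V(s_t)
    A_t     = delta_t + gamma * lam * mask_t * A_{t+1}
    masks[t] == 0.0 cuts both terms at episode boundaries.
    """
    values = values + [next_value]  # append bootstrap value
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] * masks[t] - values[t]
        gae = delta + gamma * lam * masks[t] * gae
        advantages[t] = gae
    return advantages
```

In practice the resulting advantages are then normalized (zero mean, unit variance) across the batch before the PPO update, which is the "normalized rewards" trick the comment refers to.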
What are some alternatives?
dreamerv3 - Mastering Diverse Domains through World Models
soft-actor-critic - Implementation of the Soft Actor Critic algorithm using Pytorch.
dreamer - Dream to Control: Learning Behaviors by Latent Imagination
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
panda-gym - Set of robotic environments based on PyBullet physics engine and gymnasium.
dm_control - Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
tensorforce - Tensorforce: a TensorFlow library for applied reinforcement learning
stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code
TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning. :octopus:
planet - Learning Latent Dynamics for Planning from Pixels
PCGrad - Code for "Gradient Surgery for Multi-Task Learning"