dreamerv2
stable-baselines3-contrib
| | dreamerv2 | stable-baselines3-contrib |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 853 | 427 |
| Growth | - | 8.0% |
| Activity | 0.0 | 6.7 |
| Latest commit | over 1 year ago | 27 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dreamerv2
- Sources of Actor Gradients
In fact, they found that plain REINFORCE gradients work in DM Control now too: DreamerV2 GitHub (they just needed to turn off gradients through the action path, which I guess was being passed back with straight-through estimation? I'm actually having a hard time telling how the gradient on the action differs from the one through policy.log_prob(action)).
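To make the two gradient paths concrete, here is a minimal PyTorch sketch contrasting them for a discrete action space; the toy networks and variable names are illustrative assumptions, not taken from the DreamerV2 code:

```python
import torch
from torch import nn
from torch.distributions import OneHotCategorical

# Toy stand-ins (illustrative only): a latent state, a policy head, and a
# differentiable "imagined return" that depends on the chosen action.
latent = torch.randn(8, 16)
policy_head = nn.Linear(16, 4)          # 4 discrete actions
return_head = nn.Linear(16 + 4, 1)      # stands in for a world-model rollout

logits = policy_head(latent)
dist = OneHotCategorical(logits=logits)
action = dist.sample()

# --- Path 1: gradients through the action (straight-through estimator) ---
# The sample itself is non-differentiable, so add (probs - probs.detach())
# to route gradients back into the logits via the action tensor.
action_st = action + dist.probs - dist.probs.detach()
imagined_return = return_head(torch.cat([latent, action_st], dim=-1)).squeeze(-1)
dynamics_loss = -imagined_return.mean()

# --- Path 2: REINFORCE ----------------------------------------------------
# Gradients flow only through log_prob(action); the return is detached and
# acts as a scalar weight (a value baseline would normally be subtracted).
reinforce_weight = imagined_return.detach()
reinforce_loss = -(dist.log_prob(action) * reinforce_weight).mean()
```

In the first path the return depends on the action tensor and gradients reach the logits via the straight-through trick; in the second, the return is detached and only weights the log-probability.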
- PyDreamer: model-based RL written in PyTorch + integrations with DM Lab and MineRL environments
This is my implementation of Hafner et al.'s DreamerV2 algorithm. I found the PlaNet/Dreamer/DreamerV2 paper series to be some of the coolest RL research in recent years, showing convincingly that MBRL (model-based RL) does work and is competitive with model-free algorithms. And we all know that AGI will be model-based, right? :)
- Any current state-of-the-art libraries for training agents to play Atari games?
Last I checked, for running off a single node, the state of the art was Dreamerv2 https://github.com/danijar/dreamerv2
- Google AI, DeepMind And The University of Toronto Introduce DreamerV2, The First Reinforcement Learning (RL) Agent That Outperforms Humans on The Atari Benchmark
stable-baselines3-contrib
- Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
# https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/blob/master/sb3_contrib/tqc/tqc.py :
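As a hedged, self-contained sketch of how an n-step return can be folded into a bootstrapped target (for TQC the bootstrap value would be the truncated mean of the target quantiles); this is not the sb3_contrib implementation, and all names are illustrative:

```python
import torch

def n_step_target(rewards, dones, bootstrap_value, gamma=0.99, n=3):
    """Accumulate an n-step discounted return and bootstrap from a value.

    For TQC, bootstrap_value would be the truncated mean of the target
    quantiles at step n; here it is just a tensor.
    """
    target = bootstrap_value
    # Walk backwards from step n-1 to 0 so later rewards are discounted more;
    # a done flag at step k cuts off bootstrapping past the episode boundary.
    for k in reversed(range(n)):
        target = rewards[k] + gamma * (1.0 - dones[k]) * target
    return target

# Toy usage: batch of 4 transitions, 3-step returns.
rewards = torch.rand(3, 4)     # rewards[k] = reward at offset k
dones = torch.zeros(3, 4)      # episode-termination flags
bootstrap = torch.rand(4)      # e.g. truncated quantile mean at step n
print(n_step_target(rewards, dones, bootstrap))
```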
- Understanding Action Masking in RLlib
Here's a theoretical overview and an implementation of action masking for PPO.
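The core trick is framework-agnostic: set the logits of invalid actions to a very negative value before building the distribution, so sampling, log-probabilities, and entropy all respect the mask. Here is a hedged PyTorch sketch (not RLlib's or sb3_contrib's actual code; the helper name is made up):

```python
import torch
from torch.distributions import Categorical

def masked_distribution(logits, action_mask):
    """Fill logits of invalid actions with a large negative value so they
    get ~zero probability, then build the categorical distribution."""
    invalid = ~action_mask.bool()
    masked_logits = logits.masked_fill(invalid, -1e8)
    return Categorical(logits=masked_logits)

# Toy usage: 5 actions, only actions 0 and 3 are legal this step.
logits = torch.randn(5)
mask = torch.tensor([1, 0, 0, 1, 0])
dist = masked_distribution(logits, mask)
action = dist.sample()              # always a legal action
log_prob = dist.log_prob(action)    # used in the PPO ratio as usual
```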
- PPO rollout buffer for turn-based two-player game with varying turn lengths
Simplified version of rollout collection (adapted from ppo_mask.py line 282):
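As a rough, hedged sketch of the idea only (the environment API and buffer layout below are assumptions, not the MaskablePPO interface from ppo_mask.py): route each transition into the buffer of the player who acted, so each player's data stays on-policy even when turn lengths vary.

```python
from collections import defaultdict

def collect_rollout(env, policies, n_steps=2048):
    """Collect transitions into one buffer per player for a turn-based game.

    Hypothetical interfaces: env.reset() -> (obs, current_player),
    env.step(action) -> (next_obs, reward, done, next_player),
    policies[player].act(obs) -> (action, log_prob, value).
    """
    buffers = defaultdict(list)           # player_id -> list of transitions
    obs, current_player = env.reset()
    for _ in range(n_steps):
        action, log_prob, value = policies[current_player].act(obs)
        next_obs, reward, done, next_player = env.step(action)
        # Store under the player who actually acted, regardless of how many
        # consecutive moves that player took.
        buffers[current_player].append((obs, action, log_prob, value, reward, done))
        obs, current_player = env.reset() if done else (next_obs, next_player)
    return buffers
```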
- GitHub Copilot: your AI pair programmer
Transformers (GPT-3) aren't quite _supervised_, but they do require valid samples.
Agree 100% with RL being the path forward. You probably have already seen ( https://venturebeat.com/2021/06/09/deepmind-says-reinforceme... ). Personally I'm really stoked for this https://github.com/Stable-Baselines-Team/stable-baselines3-c... , which will make it a lot easier for rubes like me to use RL.
- [P] Stable-Baselines3 v1.0 - Reliable implementations of RL algorithms
But as we already have vanilla DQN and QR-DQN (in our contrib repo: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ), I think it is already a good start for off-policy discrete-action algorithms. (QR-DQN is usually competitive with DQN + extensions.)
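For reference, QR-DQN in the contrib repo follows the usual Stable-Baselines3 API; a minimal usage sketch (the hyperparameters shown are illustrative, not recommended settings):

```python
from sb3_contrib import QRDQN

# 50 quantiles per action; other hyperparameters left at their defaults.
policy_kwargs = dict(n_quantiles=50)
model = QRDQN("MlpPolicy", "CartPole-v1", policy_kwargs=policy_kwargs, verbose=1)
model.learn(total_timesteps=10_000)
model.save("qrdqn_cartpole")
```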
What are some alternatives?
dreamerv3 - Mastering Diverse Domains through World Models
muzero-general - MuZero
dreamer - Dream to Control: Learning Behaviors by Latent Imagination
TabNine - AI Code Completions
panda-gym - Set of robotic environments based on PyBullet physics engine and gymnasium.
stable-baselines3-c
dm_control - Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
copilot-cli - The AWS Copilot CLI is a tool for developers to build, release and operate production ready containerized applications on AWS App Runner or Amazon ECS on AWS Fargate.
planet - Learning Latent Dynamics for Planning from Pixels
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
orion - Asynchronous Distributed Hyperparameter Optimization.
robot-gym - RL applied to robotics.