muzero-general
seed_rl
| | muzero-general | seed_rl |
|---|---|---|
| Mentions | 14 | 8 |
| Stars | 2,372 | 760 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 3 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
muzero-general
-
Open source rules engine for Magic: The Gathering
I went looking for MuZero implementations in order to see how, exactly, they interact with the game space. Based on this one, which had the most stars in the muzero topic, it appears that it needs to be able to discern legal next steps from the current game state https://github.com/werner-duvaud/muzero-general/blob/master/...
So I guess one could MuZero the cards Forge has implemented, but I believe it's a bit chicken-and-egg with a "free text" game like M:TG: to train, one would need to know the legal steps from any random game state, but to have legal steps one would need to be able to read and interpret English rules and card text
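The dependency described above, where the environment must enumerate legal moves for any state, is visible in muzero-general's game interface, which exposes a `legal_actions()` method per game. A minimal sketch, assuming that interface (the Tic-Tac-Toe body here is an illustrative stand-in, not code from the repo):

```python
class TicTacToe:
    """Illustrative stand-in for a muzero-general game (interface assumed)."""

    def __init__(self):
        self.board = [0] * 9  # 0 = empty, 1 = player, -1 = opponent

    def legal_actions(self):
        # MuZero's search may only expand these actions at the root, so the
        # environment must be able to enumerate them for ANY state -- the crux
        # of the chicken-and-egg problem for a free-text game like M:TG.
        return [i for i, cell in enumerate(self.board) if cell == 0]
```

For a board game this is a one-liner; for M:TG it would require interpreting English card text, which is exactly the gap the comment points at.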
- I placed Stockfish (white) against ChatGPT (black). Here's how the game went.
- Ask HN: What interesting problems are you working on? (2022 Edition)
-
How to "fit" the output of the Critic to the dimension of the reward?
You may want to use the trick described in https://arxiv.org/pdf/1805.11593.pdf as a Transformed Bellman Operator. Its effectiveness is demonstrated in the original MuZero paper (https://arxiv.org/pdf/1911.08265.pdf, Appendix F). You can find an implementation of the method here: https://github.com/werner-duvaud/muzero-general (see support_to_scalar in muzero/models.py:649)
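For reference, the value-scaling function at the heart of that trick, and its exact inverse as given in the MuZero paper's Appendix F, can be sketched in a few lines (muzero-general's `support_to_scalar` additionally converts a categorical support distribution to a scalar expectation before applying the inverse; this is just the scalar core):

```python
import math

def h(x, eps=0.001):
    # Value-scaling function from Pohlen et al. (2018):
    #   h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x
    # Compresses large returns so the critic can fit them with a bounded output.
    return math.copysign(1.0, x) * (math.sqrt(abs(x) + 1.0) - 1.0) + eps * x

def h_inv(x, eps=0.001):
    # Exact inverse of h (MuZero paper, Appendix F), used to map the critic's
    # output back to the reward/return scale.
    return math.copysign(1.0, x) * (
        ((math.sqrt(1.0 + 4.0 * eps * (abs(x) + 1.0 + eps)) - 1.0) / (2.0 * eps)) ** 2
        - 1.0
    )
```

Targets are stored as `h(return)` during training and decoded with `h_inv` at inference, so the critic's output dimension never has to match the raw reward scale.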
-
MuZero unable to solve non-slippery FrozenLake environment?
I have used this implementation from MuZero: https://github.com/werner-duvaud/muzero-general
-
RL for chess
+1 to taking a look at OpenSpiel. It has AlphaZero in C++ and Python, and there is even an open PR that allows running a UCI bot (e.g. Stockfish). You can also load chess via the OpenSpiel wrapper in muzero-general: https://github.com/werner-duvaud/muzero-general
-
The future of MuZero, and where to go for news
When I looked up some community implementations, like Werner Duvaud's on GitHub and Discord, hoping to make my own contributions, I soon found that I was hopelessly out of my depth as an amateur programmer, even with the help of other sources like this walkthrough series. From what I could tell, though, most people working on this sort of thing seemed to be tackling relatively simple games. At first I thought this was largely due to limits on the hobby time or computing power available to them, but then I noticed that, unless I have misunderstood something, the games apparently have to be rebuilt entirely inside the engine of (this implementation of) MuZero, which would obviously also limit the complexity of the games chosen.
- Is MuZero currently the best RL algo that we have now?
-
"muzero-general", PyTorch/Ray code for Gym/Atari/board-games (reasonable results + checkpoints for small tasks)
Windows support (Experimental / Workaround: Use the notebook in Google Colab)
-
Muzero code implementation
There are several if you google "muzero github", e.g. https://github.com/werner-duvaud/muzero-general
seed_rl
-
Fast and hackable frameworks for RL research
I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
-
[Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
Hey guys! I was fascinated by the concept of seed_rl when it first came out because I believe it could accelerate training on a single local machine. But I found that the official repo was recently archived and is no longer maintained, so I'm looking for alternatives that let me do seed_rl-style distributed RL. Ray (or RLlib) is the most widely used DRL library, but it doesn't seem to use the seed_rl style. Can anyone recommend a distributed RL library for this, one that's good for research and open to lots of code modification? And is RLlib worth using for single-machine local training despite those cons? Thank you!!
-
V-MPO - what do you think
You may have a look at the implementation here: https://github.com/google-research/seed_rl
-
Need some help understanding what steps to take to debug a RL agent
For some context, this is an algo trading bot that's trained on intraday time series stock data. I'm using Google Research's SEED RL codebase with vtrace. The model has a sequence length of 240, and 30 features. Each iteration represents training on a batch of 256 samples, and there are 256 environments being sampled from at a time. A reward is applied when the agent closes a position, and the size of the reward is based on how much profit (positive or negative) was made. The agent is forced to close its remaining position at the end of each day, resulting in a larger negative reward than normal if it had a large and unprofitable position.
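The reward scheme described above can be made concrete with a small sketch. This is a hypothetical reconstruction, not the poster's code; the `penalty` amplification for forced end-of-day closes is an assumed mechanism standing in for "a larger negative reward than normal":

```python
def close_position_reward(entry_price, exit_price, size, forced_eod=False, penalty=0.5):
    """Hypothetical sketch of the described reward scheme.

    Reward is the realized profit when a position closes; a forced end-of-day
    close of a losing position is penalized extra (assumed multiplier).
    """
    profit = (exit_price - entry_price) * size
    if forced_eod and profit < 0:
        profit *= (1.0 + penalty)  # amplify the negative reward
    return profit
```

Sparse, close-time-only rewards like this are a common source of credit-assignment trouble over a 240-step sequence, which is one of the first things worth checking when debugging such an agent.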
-
Strange results from training with Google Cloud TPUs, seem to be very inefficient?
I've been doing some tests to find the most efficient configuration for training using Google Cloud AI Platform. The results are here (note that "step" in this case represents a single sample/observation/frame from a single environment; iteration represents running the minimization function on a single batch). The results are a bit strange. I was under the assumption that training with TPUs would be one of the most efficient ways to train, but instead it's the least efficient by a wide margin. I'm using Google Research's SEED RL codebase, so I'm assuming there are no bugs in my code.
-
Strange training results: why is a batch size of 1 more efficient than larger batch sizes, despite using a GPU/TPU?
I'm currently doing some tests in preparation for my first real bit of training. I'm using Google Cloud AI Platform to train, and am trying to find the optimal machine setup. It's a work in progress, but here's a table I'm putting together to get a sense of the efficiency of each setup. On the left you'll see the accelerator type, ordered from least to most expensive. Here you'll also find the number of accelerators used, the cost per hour, and the batch size. To the right are the average time it took to complete an entire training iteration and how long it took to complete the minimization step. You'll notice that the values are almost identical for each setup; I'm using Google Research's SEED RL, so I thought to record both values since I'm not sure exactly what happens between iterations. Turns out it's not much. There's also a calculation of the time it takes to complete a single "step" (aka, a single observation from a single environment), as well as the average cost per step.
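The cost-per-step bookkeeping that table is built from reduces to a couple of divisions. A minimal sketch, with purely illustrative numbers (not the poster's measurements):

```python
def cost_per_step(cost_per_hour, seconds_per_iteration, batch_size):
    """Cost of a single 'step' (one observation from one environment).

    One iteration consumes batch_size steps, so:
      steps/hour = (3600 / seconds_per_iteration) * batch_size
      cost/step  = cost_per_hour / steps_per_hour
    """
    iterations_per_hour = 3600.0 / seconds_per_iteration
    steps_per_hour = iterations_per_hour * batch_size
    return cost_per_hour / steps_per_hour

# e.g. a $2.00/hour machine doing 0.5 s iterations over batches of 256
# processes 7200 * 256 = 1,843,200 steps/hour.
```

Framed this way, a pricier accelerator only wins if its iteration time (or usable batch size) improves by more than its cost multiple, which is exactly the comparison the table is after.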
-
Having trouble passing custom flags with AI Platform
I'm trying to get Google Research's SEED project working with some tweaks specific to my use case. One of the changes is that I need to pass more custom flags than they do in the samples they provide in their setup.sh file (i.e., environment, agent, actors_per_worker, etc). I've added flags.DEFINE_integer/float/string/etc calls to the project files for my custom flags, but it's throwing the following error: FATAL Flags parsing error: Unknown command line flag 'num_actors_with_summaries'. This error is not being thrown for the custom flags they pass, only the ones I've added. For the life of me I can't figure out what it is they're doing differently than me.
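For context, SEED RL uses absl flags, and absl only knows about a flag if the module containing its `flags.DEFINE_*` call has been imported before the command line is parsed; "Unknown command line flag" usually means the defining module never ran in the entry-point process (e.g. the flag was defined in a file the trainer binary doesn't import). A minimal sketch of the mechanics (the flag name is taken from the error above; `'trainer'` is just a placeholder program name):

```python
from absl import flags

FLAGS = flags.FLAGS

# This DEFINE_* call must have executed (i.e. its module must be imported by
# the entry point) BEFORE the command line is parsed, or absl raises
# "Unknown command line flag".
flags.DEFINE_integer('num_actors_with_summaries', 1,
                     'Number of actors that write summaries.')

# Parsing a command line explicitly (absl's app.run normally does this):
FLAGS(['trainer', '--num_actors_with_summaries=4'])
```

So the thing to check is which Python file AI Platform actually launches, and whether that file (or something it imports) contains or imports your `DEFINE_*` calls.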
-
New to Linux, trying to understand why a variable isn't getting assigned in an .sh file
I'm trying to get the SEED project by Google Research working. This is my first time doing anything with Linux, so I'm a bit lost in understanding why a specific line isn't working. The line in question is line 21 of this file. Line 22 outputs the following error: /../docker/push.sh: No such file or directory exists. I added a printf after line 21 as follows: printf "test: %s\n" $DIR. It outputs the following: test: .
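A `DIR` that prints as `.` typically means the script resolved its own path relative to the current working directory rather than to an absolute path. Without seeing the file, the usual robust idiom for this kind of line 21 looks like the sketch below (whether SEED's script matches it exactly is an assumption):

```shell
#!/bin/bash
# dirname alone returns a relative path (often just ".") when the script is
# invoked as ./setup.sh; cd-ing into it and running pwd yields the absolute
# directory of the script itself, which makes "$DIR/../docker/push.sh" resolve
# correctly regardless of where the script was launched from.
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
echo "$DIR"
```

Note that the command substitution silently yields an empty string if the `cd` fails, so a quick `echo "$DIR"` right after the assignment (as the poster did) is exactly the right debugging move.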
What are some alternatives?
deep-RL-trading - playing idealized trading games with deep reinforcement learning
tianshou - An elegant PyTorch deep reinforcement learning library.
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
alpha-zero-general - A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
Apache Impala - Real-time SQL query engine for data stored in Apache Hadoop clusters
open_spiel - OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
machin - Reinforcement learning library(framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
pytorch-ddpg - Deep deterministic policy gradient (DDPG) in PyTorch 🚀
DI-engine - OpenDILab Decision AI Engine