mini-AlphaStar
(JAIR'2022) A mini-scale reproduction of the AlphaStar program. Note: the original AlphaStar is the AI proposed by DeepMind to play StarCraft II. JAIR = Journal of Artificial Intelligence Research. (by liuruoze)
PPO-PyTorch
Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch (by nikhilbarhate99)
| | mini-AlphaStar | PPO-PyTorch |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 278 | 1,453 |
| Growth | - | - |
| Activity | 0.0 | 2.8 |
| Last commit | over 1 year ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
mini-AlphaStar
Posts with mentions or reviews of mini-AlphaStar. We have used some of these posts to build our list of alternatives and similar projects.
- Better AI opponent

  With a quick Google search I found this, but there seemed to be no pretrained models, and without a technical background it would be pretty much impossible to run.
PPO-PyTorch
Posts with mentions or reviews of PPO-PyTorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-19.
- Where does the loss function for Policy Gradient come from?

  It's just very convenient implementation-wise; in just a few lines you can get the "loss" (from https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO.py).
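A minimal sketch of that clipped-objective "loss" in PyTorch (the function and tensor names here are illustrative, not copied from PPO.py; `logprobs`, `old_logprobs`, and `advantages` are assumed to be per-timestep tensors of equal length):

```python
import torch

def ppo_clipped_loss(logprobs, old_logprobs, advantages, eps_clip=0.2):
    # Probability ratio r_t = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t),
    # computed in log space for numerical stability.
    ratios = torch.exp(logprobs - old_logprobs.detach())

    # Clipped surrogate objective: the unclipped term and the term with
    # the ratio clamped to [1 - eps, 1 + eps].
    surr1 = ratios * advantages
    surr2 = torch.clamp(ratios, 1.0 - eps_clip, 1.0 + eps_clip) * advantages

    # Take the pessimistic (smaller) of the two, then negate, because
    # optimizers minimize while PPO maximizes the surrogate.
    return -torch.min(surr1, surr2).mean()
```

Taking the minimum of the unclipped and clipped terms gives a pessimistic bound on the policy improvement, and the negation turns that maximization objective into something `loss.backward()` plus a standard optimizer can minimize.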
- A2C/PPO with continuous action space

  In some methods, like the one here, the actor network has two heads, one for the mean and one for the variance. In other methods, like the one here, the network only outputs the mean, while the variance is pre-defined and decays over the course of training; both variants are sketched below.
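A hedged sketch of those two parameterizations (the class names `TwoHeadActor` and `MeanOnlyActor` and all layer sizes are hypothetical, not taken from either linked implementation):

```python
import torch
import torch.nn as nn

# Variant 1 (sketch): a two-headed actor that predicts both the mean and a
# per-state log-std of a Gaussian action distribution.
class TwoHeadActor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, action_dim)
        self.log_std_head = nn.Linear(hidden, action_dim)

    def forward(self, state):
        h = self.body(state)
        mean = torch.tanh(self.mean_head(h))
        std = self.log_std_head(h).exp()  # exp keeps the std positive
        return torch.distributions.Normal(mean, std)

# Variant 2 (sketch): the network outputs only the mean; the std is a plain
# hyperparameter decayed by an external schedule rather than learned by SGD.
class MeanOnlyActor(nn.Module):
    def __init__(self, state_dim, action_dim, init_std=0.6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )
        self.std = init_std  # decayed manually during training

    def forward(self, state):
        mean = self.net(state)
        return torch.distributions.Normal(mean, self.std)
```

The trade-off: a learned std head lets exploration adapt per state, while a fixed, externally decayed std keeps the optimization simpler and more predictable at the cost of that adaptability.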