HandyRL
HandyRL is a handy and simple framework based on Python and PyTorch for distributed reinforcement learning that is applicable to your own environments. (by DeNA)
PPO-PyTorch
Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch (by nikhilbarhate99)
| | HandyRL | PPO-PyTorch |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 282 | 1,483 |
| Growth | 0.0% | - |
| Activity | 4.3 | 2.8 |
| Latest commit | 12 days ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HandyRL
Posts with mentions or reviews of HandyRL. We have used some of these posts to build our list of alternatives and similar projects.
PPO-PyTorch
Posts with mentions or reviews of PPO-PyTorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-19.
-
Where does the loss function for Policy Gradient come from?
It's just very convenient implementation-wise; in just a few lines you can get the "loss" (from https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO.py):
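For reference, a minimal sketch of that clipped surrogate loss in PyTorch. This is not the repository's exact code; the function and variable names here are illustrative:

```python
import torch

def ppo_clipped_loss(log_probs, old_log_probs, advantages, eps_clip=0.2):
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s), computed in log space.
    ratios = torch.exp(log_probs - old_log_probs)
    # Unclipped and clipped surrogate objectives.
    surr1 = ratios * advantages
    surr2 = torch.clamp(ratios, 1 - eps_clip, 1 + eps_clip) * advantages
    # PPO maximizes the pessimistic (element-wise minimum) surrogate,
    # so the "loss" to minimize is its negation.
    return -torch.min(surr1, surr2).mean()
```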
-
A2C/PPO with continuous action space
In some methods, like the one here, the actor network has two heads: one for the mean and one for the variance. In other methods, like the one here, the network only outputs the mean, while the variance is pre-defined and decays throughout training.
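As a rough illustration (a minimal sketch, not code from either repository; class names, layer sizes, and the decay schedule are assumptions), the two designs might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class TwoHeadGaussianActor(nn.Module):
    """Actor that predicts both the mean and the variance of the action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, act_dim)
        # Predict log-std so the standard deviation stays positive.
        self.log_std_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        mean = self.mean_head(h)
        std = self.log_std_head(h).exp()
        return torch.distributions.Normal(mean, std)

class MeanOnlyGaussianActor(nn.Module):
    """Actor that predicts only the mean; the std is fixed and decayed externally."""
    def __init__(self, obs_dim, act_dim, hidden=64, init_std=0.6):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, act_dim))
        self.std = init_std  # not a learned parameter

    def decay_std(self, rate=0.05, min_std=0.1):
        # Called periodically during training to shrink exploration noise.
        self.std = max(self.std - rate, min_std)

    def forward(self, obs):
        mean = self.body(obs)
        return torch.distributions.Normal(mean, self.std)
```

The first design lets the policy learn state-dependent exploration; the second trades that flexibility for a simpler, manually scheduled noise level.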
What are some alternatives?
When comparing HandyRL and PPO-PyTorch you can also consider the following projects:
adaptdl - Resource-adaptive cluster scheduler for deep learning training.
l2rpn-baselines - L2RPN Baselines, a repository to host baselines for L2RPN competitions.