chainerrl vs DeepRL-TensorFlow2
| | chainerrl | DeepRL-TensorFlow2 |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 1,141 | 573 |
| Growth (stars, month over month) | 0.0% | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 2 years ago | almost 2 years ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chainerrl
Help with my PyTorch implementation of PPO
Code for https://arxiv.org/abs/1709.06560 found: https://github.com/chainer/chainerrl
Any working ACER implementation for continuous action spaces?
I implemented my own version of ACER that supports a discrete action space, and now I need to extend it to support continuous action spaces. I've seen a couple of implementations here and here; the first doesn't work on PongNoFrameskip-v4, and the other doesn't work on macOS.
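For context, the usual first step in porting a discrete-action ACER to continuous actions is swapping the softmax policy head for a diagonal-Gaussian one and computing importance weights from the density of the sampled action rather than summing over all actions. A minimal PyTorch sketch of that head (the class name and the state-independent log-std are illustrative choices of mine, not taken from any of the linked repos):

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicyHead(nn.Module):
    """Diagonal-Gaussian head that replaces a softmax/categorical head
    when moving from discrete to continuous action spaces."""

    def __init__(self, hidden_dim: int, action_dim: int):
        super().__init__()
        self.mean = nn.Linear(hidden_dim, action_dim)
        # State-independent log-std; a simple, common parameterisation.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, h: torch.Tensor):
        dist = Normal(self.mean(h), self.log_std.exp())
        action = dist.sample()
        # Per-action log-density: for continuous ACER the importance
        # weight is exp(log pi(a|s) - log mu(a|s)) of the *sampled*
        # action, not a sum over all actions as in the discrete case.
        log_prob = dist.log_prob(action).sum(dim=-1)
        return action, log_prob
```

A full continuous-action ACER (per the original paper) also needs stochastic dueling networks and a trust-region update, so this is only the policy-head piece of the port.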
Beginner attempting to implement Noisy DQN
I tried all the versions I found, and in most of them the network couldn't even learn to drive sigma to 0 (or close to it). The only implementation where I actually saw an improvement was one that changes the noise directly when the noisy layers are called, in this repo. I don't know if this is the correct way, but it certainly showed good results.
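The technique being discussed is the factorised-Gaussian noisy layer from the NoisyNet paper (Fortunato et al., 2017), where the sigma parameters are learned and are expected to shrink as the agent needs less exploration. A minimal PyTorch sketch (sigma0 = 0.5 follows the paper; the class itself is illustrative, not taken from the linked repo):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Factorised-Gaussian noisy linear layer (NoisyNet)."""

    def __init__(self, in_features: int, out_features: int, sigma0: float = 0.5):
        super().__init__()
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        self.register_buffer("eps_in", torch.zeros(in_features))
        self.register_buffer("eps_out", torch.zeros(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 / math.sqrt(in_features))
        nn.init.constant_(self.bias_sigma, sigma0 / math.sqrt(in_features))
        self.reset_noise()

    @staticmethod
    def _f(x: torch.Tensor) -> torch.Tensor:
        # Noise transform from the paper: f(x) = sgn(x) * sqrt(|x|).
        return x.sign() * x.abs().sqrt()

    def reset_noise(self):
        # Resample factorised noise; typically called once per step.
        self.eps_in.normal_()
        self.eps_out.normal_()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            w_eps = torch.outer(self._f(self.eps_out), self._f(self.eps_in))
            weight = self.weight_mu + self.weight_sigma * w_eps
            bias = self.bias_mu + self.bias_sigma * self._f(self.eps_out)
        else:
            # At evaluation time, use the mean weights with no noise.
            weight, bias = self.weight_mu, self.bias_mu
        return F.linear(x, weight, bias)
```

Two details that commonly explain "sigma never shrinks": the noise must be resampled (reset_noise) between steps rather than fixed once, and the layer must fall back to the mean weights in eval mode, as above.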
DeepRL-TensorFlow2
PPO implementation in TensorFlow2
I've been searching for a clean, good, and understandable implementation of PPO for continuous action spaces in TF2, one clear enough for me to apply my own modifications. The closest thing I have found is this code, which doesn't seem to work properly even on a simple Gym CartPole env (the issues discussed in its GitHub repo suggest the same problem), so I have some doubts :). I was wondering whether you could recommend an implementation that you trust :)
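Whichever implementation you end up trusting, the core of PPO is small enough to verify by hand. A minimal TF2 sketch of the clipped surrogate loss (the function name and argument layout are my own; continuous vs. discrete actions only changes how the log-probabilities are computed upstream):

```python
import tensorflow as tf

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper (Schulman et al., 2017).
    All arguments are 1-D tensors over a batch of transitions; old_log_probs
    come from the behaviour policy and must carry no gradients."""
    ratio = tf.exp(new_log_probs - tf.stop_gradient(old_log_probs))
    unclipped = ratio * advantages
    clipped = tf.clip_by_value(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximises the surrogate, so we minimise its negation.
    return -tf.reduce_mean(tf.minimum(unclipped, clipped))
```

Checking a candidate repo's loss against this dozen-line reference is a quick way to rule out the kind of CartPole failure described above.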
Question about using tf.stop_gradient in separate Actor-Critic networks for A2C implementation for TF2
I have been looking at this implementation of A2C. The author uses stop_gradient only on the critic network (L90) but not on the actor network (L61) for the continuous case. However, it is used in both the actor and critic networks for the discrete case. Can someone explain why?
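One plausible answer, sketched below with my own naming: the advantage that weights the policy-gradient term must be treated as a constant, so it gets wrapped in tf.stop_gradient. When the actor and critic are two physically separate networks trained by separate optimizers, omitting the wrap on one path can be harmless because there are no shared variables for the gradient to leak into, which may account for the asymmetry between the continuous and discrete cases.

```python
import tensorflow as tf

def a2c_losses(log_probs, values, returns, value_coef=0.5):
    """Actor and critic losses for A2C with separate networks.
    log_probs, values, returns are 1-D tensors over a batch."""
    # The advantage is a weighting factor, not an optimisation target
    # for the actor, so no gradient should flow from it into the critic.
    advantages = tf.stop_gradient(returns - values)
    actor_loss = -tf.reduce_mean(log_probs * advantages)
    # The critic regresses its value estimates onto the returns.
    critic_loss = value_coef * tf.reduce_mean(tf.square(returns - values))
    return actor_loss, critic_loss
```

With a single shared trunk, dropping the stop_gradient would let the actor term distort the value head, so keeping it everywhere, as the discrete case there does, is the safer default.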
What are some alternatives?
TensorLayer - Deep Learning and Reinforcement Learning Library for Scientists and Engineers
soft-actor-critic - Re-implementation of Soft-Actor-Critic (SAC) in TensorFlow 2.0
machin - Reinforcement learning library (framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
tensorforce - Tensorforce: a TensorFlow library for applied reinforcement learning
deep-q-learning - Minimal Deep Q Learning (DQN & DDQN) implementations in Keras
TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning. :octopus:
DeepLearning - Contains all my works, references for deep learning
ydata-synthetic - Synthetic data generators for tabular and time-series data
minimalRL - Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
acer - PyTorch implementation of both discrete and continuous ACER