fundamentalRL
Educational codebase demonstrating some of the most common RL algorithms. (by mpgussert)
chainerrl
ChainerRL is a deep reinforcement learning library built on top of Chainer. (by chainer)
| | fundamentalRL | chainerrl |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 3 | 1,141 |
| Growth | - | 0.0% |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Mentions counts the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
fundamentalRL
Posts with mentions or reviews of fundamentalRL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-06.
chainerrl
Posts with mentions or reviews of chainerrl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-06.
- Help with my PyTorch implementation of PPO
  Code for https://arxiv.org/abs/1709.06560 can be found at https://github.com/chainer/chainerrl
- Any working ACER implementation for continuous action spaces?
  I implemented my own version of ACER that supports discrete action spaces, and I need to add an extension that supports continuous action spaces. I've seen a couple of implementations here and here; the first doesn't work on PongNoFrameskip-v4, and the other doesn't work on macOS.
- Beginner attempting to implement Noisy DQN
  I tried every version I found, and in most of them the network couldn't even learn to drive sigma to 0 (or close to it). The only implementation where I actually saw improvement came from resampling the noise directly when calling the noisy layers, as in this repository. I don't know whether that is the correct approach, but it certainly produced good results.
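The resampling trick mentioned in that post can be sketched as a small noisy linear layer. This is an illustrative NumPy sketch (not code from either repository), assuming the factorized-Gaussian NoisyNet scheme: learnable means and sigmas, with fresh noise drawn on each forward call; all names here are hypothetical.

```python
import numpy as np

def scale_noise(x):
    # Factorized-noise scaling f(x) = sign(x) * sqrt(|x|).
    return np.sign(x) * np.sqrt(np.abs(x))

class NoisyLinear:
    """y = (mu_w + sigma_w * eps_w) @ x + (mu_b + sigma_b * eps_b)."""

    def __init__(self, in_dim, out_dim, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_dim)
        # Learnable parameters (here just arrays; an optimizer would update them).
        self.mu_w = self.rng.uniform(-bound, bound, (out_dim, in_dim))
        self.mu_b = self.rng.uniform(-bound, bound, out_dim)
        self.sigma_w = np.full((out_dim, in_dim), sigma0 / np.sqrt(in_dim))
        self.sigma_b = np.full(out_dim, sigma0 / np.sqrt(in_dim))
        self.in_dim, self.out_dim = in_dim, out_dim
        self.eps_w = np.zeros((out_dim, in_dim))
        self.eps_b = np.zeros(out_dim)

    def __call__(self, x, resample=True):
        if resample:
            # Draw fresh factorized noise on every forward pass --
            # the "changing the noise directly when calling" idea.
            eps_in = scale_noise(self.rng.standard_normal(self.in_dim))
            eps_out = scale_noise(self.rng.standard_normal(self.out_dim))
            self.eps_w = np.outer(eps_out, eps_in)
            self.eps_b = eps_out
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return x @ w.T + b

layer = NoisyLinear(4, 2)
out1 = layer(np.ones(4))                   # fresh noise
out2 = layer(np.ones(4))                   # different noise, different output
out3 = layer(np.ones(4), resample=False)   # reuses the last noise sample
```

If training drives the sigmas toward zero, the layer degenerates to an ordinary linear layer, which is the behavior the post's author was checking for.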
What are some alternatives?
When comparing fundamentalRL and chainerrl you can also consider the following projects:
DeepLearning - Contains all my works, references for deep learning
TensorLayer - Deep Learning and Reinforcement Learning Library for Scientists and Engineers
machin - Reinforcement learning library (framework) for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
DeepRL-TensorFlow2 - 🐋 Simple implementations of various popular Deep Reinforcement Learning algorithms using TensorFlow2
deep-q-learning - Minimal Deep Q Learning (DQN & DDQN) implementations in Keras
TensorFlow2.0-for-Deep-Reinforcement-Learning - TensorFlow 2.0 for Deep Reinforcement Learning. :octopus:
acer - PyTorch implementation of both discrete and continuous ACER