| | AgileRL | cleanrl |
|---|---|---|
| Mentions | 12 | 41 |
| Stars | 501 | 4,564 |
| Growth | 4.2% | - |
| Activity | 9.8 | 6.3 |
| Latest commit | 5 days ago | 27 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AgileRL
- [P] Introducing PPO and Rainbow DQN to our super fast evolutionary HPO reinforcement learning framework
- Introducing PPO and Rainbow DQN to our super fast evolutionary HPO reinforcement learning framework
- [P] Significant improvements for multi-agent reinforcement learning!
  Please check it out! https://github.com/AgileRL/AgileRL
- 10x faster reinforcement learning hyperparameter optimization than SOTA - now with distributed training!
- [P] 10x faster reinforcement learning hyperparameter optimization than SOTA - now with distributed training!
- (1/2) May 2023
  Deep Reinforcement Learning library focused on improving development by introducing RLOps - MLOps for reinforcement learning (https://github.com/AgileRL/AgileRL)
- [P] 10x faster reinforcement learning HPO - now for RLHF!
  https://github.com/AgileRL/AgileRL/blob/main/CONTRIBUTING.md has a link to our Discord too
- 10x faster reinforcement learning HPO - now with CNNs!
- [P] 10x faster reinforcement learning HPO - now with CNNs!
- [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up
  GitHub: https://github.com/AgileRL/AgileRL
cleanrl
- [P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
  PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bug fixes, documentation updates, and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
- PPO agent for "2048": help requested
  Here's where the problem starts: after implementing a custom environment that follows the typical gymnasium interface and using a slightly adjusted PPO implementation from CleanRL, I cannot get the agent to learn anything at all, even though this specific implementation seems to work just fine on basic gymnasium examples. I am hoping the RL community here can help me with some useful pointers. (A minimal environment sketch appears after this list.)
- [P] 10x faster reinforcement learning hyperparameter optimization than SOTA - now with distributed training!
- PPO ignores high rewards in deterministic system
  Try out a standard implementation with some standard parameters from here: https://github.com/vwxyzjn/cleanrl/tree/master/cleanrl
- SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
  I am trying to run cleanrl on the `Pendulum-v1` environment. I did that by going here and changing the default `env-id` to `parser.add_argument("--env-id", type=str, default="Pendulum-v1",` (an argparse sketch appears after this list).
- Cartpole and mountain car
- cleanrl gym issues
  git clone https://github.com/vwxyzjn/cleanrl.git && cd cleanrl
  poetry install
- Why is my Soft Actor Critic Algorithm not learning?
  Can someone please help me debug my implementation of SAC. Please let me know if you have any questions. I tried comparing my work with CleanRL and caught a couple of errors. However, my implementation does diverge a lot from theirs, as I wanted to test my understanding.
- Model-based hierarchical reinforcement learning
  Shameless self-plug: as far as implementation is concerned, I am working on a (hopefully) easier-to-understand Dreamer architecture under the CleanRL library, toward also re-implementing Director, Dreamer-v3, and a JAX variant for faster training.
- [P] Robust Policy Optimization is now in CleanRL 🔥!
  Happy to share that CleanRL now has a new algorithm called Robust Policy Optimization — 5 lines of code change to PPO to get better performance in 57 out of 61 continuous action envs 🚀 (e.g., dm_control). (A sketch of the core idea appears after this list.)
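The "2048" post above describes writing a custom environment that follows the typical gymnasium interface. Below is a minimal, hedged sketch of what such an environment skeleton can look like; the class name, observation shape, and reward logic are illustrative assumptions, not the poster's actual code.

```python
# Minimal sketch of a Gymnasium-style custom environment (illustrative only).
import gymnasium as gym
import numpy as np
from gymnasium import spaces


class Simple2048Env(gym.Env):
    """Skeleton 2048-like environment; board dynamics are left as stubs."""

    def __init__(self):
        # 4x4 board flattened to 16 values, normalized to [0, 1]
        self.observation_space = spaces.Box(0.0, 1.0, shape=(16,), dtype=np.float32)
        self.action_space = spaces.Discrete(4)  # up, down, left, right
        self._board = np.zeros(16, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._board[:] = 0.0
        return self._board.copy(), {}

    def step(self, action):
        reward = 0.0          # merge reward would be computed here
        terminated = False    # True once no legal moves remain
        truncated = False
        return self._board.copy(), reward, terminated, truncated, {}
```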
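For the Pendulum-v1 post, the change being described is editing the default `--env-id` of an argparse-based training script. Here is a hedged, self-contained sketch of that pattern; the function name and help text are illustrative assumptions, not CleanRL's exact file.

```python
# Illustrative argparse pattern for overriding the default environment id.
import argparse


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-id", type=str, default="Pendulum-v1",
                        help="the id of the gymnasium environment to train on")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(args.env_id)  # argparse maps --env-id to the attribute env_id
```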
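The Robust Policy Optimization announcement describes a few-line change to PPO. As I understand the method, the core idea is to perturb the mean of the Gaussian policy with uniform noise when evaluating actions; the sketch below illustrates that idea with assumed names (`evaluate_action`, `rpo_alpha`) and is not CleanRL's actual implementation.

```python
# Hedged sketch of the RPO idea: perturb the policy mean with uniform noise.
import torch
from torch.distributions import Normal


def evaluate_action(mean, log_std, action, rpo_alpha=0.5):
    # Plain PPO would evaluate log-probs under Normal(mean, std) directly;
    # here the mean is shifted by noise drawn uniformly from [-rpo_alpha, rpo_alpha].
    noise = torch.empty_like(mean).uniform_(-rpo_alpha, rpo_alpha)
    dist = Normal(mean + noise, log_std.exp())
    return dist.log_prob(action).sum(-1), dist.entropy().sum(-1)


if __name__ == "__main__":
    # Example usage with dummy tensors
    mean = torch.zeros(2, 3)
    log_std = torch.zeros(2, 3)
    action = torch.randn(2, 3)
    logp, ent = evaluate_action(mean, log_std, action)
    print(logp.shape, ent.shape)  # torch.Size([2]) torch.Size([2])
```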
What are some alternatives?
chat-ui - Open source codebase powering the HuggingChat app
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
RLeXplore - RLeXplore provides stable baselines of exploration methods in reinforcement learning, such as intrinsic curiosity module (ICM), random network distillation (RND) and rewarding impact-driven exploration (RIDE).
tianshou - An elegant PyTorch deep reinforcement learning library.
loopquest - A Production Tool for Embodied AI
d3rlpy - An offline deep reinforcement learning library
de-torch - Minimal PyTorch Library for Differential Evolution
reinforcement-learning-discord-wiki - The RL discord wiki
Muzero - PyTorch implementation of MuZero for gym environments. It supports any Discrete, Box, and Box2D configuration for the action space and observation space.
mbrl-lib - Library for Model Based RL
q-learning-algorithms - This repository aims to provide implementations of Q-learning algorithms (DQN, Double DQN, ...) using PyTorch.
machin - Reinforcement learning library (framework) designed for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...