sample-factory vs stable-baselines3

| | sample-factory | stable-baselines3 |
|---|---|---|
| Mentions | 6 | 46 |
| Stars | 743 | 7,953 |
| Growth | - | 3.1% |
| Activity | 7.9 | 8.2 |
| Latest commit | 5 days ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
sample-factory
-
A minimal RL library for infinite horizon tasks
I take a lot of inspiration from Sample Factory and RLlib for my own RL library's implementation. Although I thoroughly enjoy both of these libraries, they just didn't quite fit my use case, which motivated me to start my own. Hopefully someone finds use in rlstack, whether through direct usage or as inspiration for their own personalized library.
-
Fast and hackable frameworks for RL research
I'm tired of having my 200M frames of Atari take 5 days to run with Dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but, to the best of my knowledge, doesn't work with replay buffers. I've been trying to get acme working, but the documentation is sparse and many of the features are broken.
-
Multi-agent Decentralized Training with a PettingZoo environment
Hi, try sample-factory
-
How is IMPALA as a framework?
Sample Factory: https://github.com/alex-petrenko/sample-factory
-
The Myth of a Superhuman AI
Everything in this reply is wrong.
In AlphaZero, for example, there were 44 million training games in total over 700,000 training steps, all within 9 hours.
Turning that into human-like numbers: 44 million games averaging 60 moves each, at 1 second of thinking time per move:
> 44,000,000 × 60 / 60 / 60 / 24 / 365 ≈ 83.7 years of training experience, gained in 9 hours
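A quick sanity check of that arithmetic in Python:

```python
games = 44_000_000       # AlphaZero training games
moves_per_game = 60      # rough average
seconds_per_move = 1     # human-like thinking time

years = games * moves_per_game * seconds_per_move / (60 * 60 * 24 * 365)
print(f"{years:.1f} years")  # -> 83.7 years of experience, gathered in 9 hours
```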
The whole field of reinforcement learning has agents training and playing games for many orders of magnitude more time than a human ever will. In fact, we can scale this to over 100k actions per second on a single machine:
https://github.com/alex-petrenko/sample-factory
Then there is also distributed reinforcement learning, where hundreds of agents play on different machines and share experience; see AlphaZero, Leela Zero, the R2D2 and R2D3 agents, Ape-X, ACER, and asynchronous PPO.
> but the data isn't useful without the context of experience
The experience is the data in Reinforcement Learning.
> and all processing power can do is overfit a model without experience.
That is wrong; agents perform what is called exploration precisely to avoid getting stuck in simple strategies.
> Even if we put AI into an army of robots running around and experiencing things, there are still scaling limits to encoding and communicating knowledge and understanding.
True, but machines scale better because they speak the same language, or they can learn to tune their language to get their message across.
> Human organizations are a great example of the scaling limits of intelligence.
Human organization is a testament to how far we can get with something as limiting as commonly used language. The language we use to communicate is open to misinterpretation because of our subjective experiences; machines do not share this limitation.
-
Best PyTorch RL library for doing research
I borrow a lot of performance tricks from Sample Factory, which is awesome but hard to modify beyond its original APPO algorithm. rlpyt was more modular, and I borrowed more ideas from it (namedarraytuple), but it was still too limited.
stable-baselines3
-
Sim-to-real RL pipeline for open-source wheeled bipeds
The latest release (v3.0.0) of Upkie's software brings a functional sim-to-real reinforcement learning pipeline based on Stable Baselines3, with standard sim-to-real tricks. The pipeline trains on the Gymnasium environments distributed in upkie.envs (setup: pip install upkie) and is implemented in the PPO balancer. Here is a policy running on an Upkie:
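As a rough sketch of the SB3 side of such a pipeline: the registration helper and env id below are assumptions modeled on the project's README, so check upkie.envs for the exact names.

```python
import gymnasium as gym
import upkie.envs  # pip install upkie
from stable_baselines3 import PPO

upkie.envs.register()                      # assumed registration helper
env = gym.make("UpkieGroundVelocity-v3")   # hypothetical env id

# Train in simulation; the saved policy is what gets deployed on the robot
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_upkie")
```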
-
[P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bugfixes, documentation updates and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
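For reference, the SB3 tutorials follow SuperSuit's vectorization pattern; a condensed sketch (wrapper versions reflect current SuperSuit releases and may differ):

```python
import supersuit as ss
from stable_baselines3 import PPO
from pettingzoo.butterfly import pistonball_v6

# Parallel PettingZoo env -> SB3-compatible vectorized env
env = pistonball_v6.parallel_env()
env = ss.color_reduction_v0(env, mode="B")    # grayscale to shrink observations
env = ss.resize_v1(env, x_size=84, y_size=84)
env = ss.frame_stack_v1(env, 3)
env = ss.pettingzoo_env_to_vec_env_v1(env)    # agents become vec-env indices
env = ss.concat_vec_envs_v1(env, 4, num_cpus=2, base_class="stable_baselines3")

model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=2_000_000)
```

Here every agent shares one policy, which is the standard parameter-sharing setup for this kind of cooperative environment.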
-
[Question] Why are there so few algorithms implemented in SB3?
I am wondering why there are so few algorithms in Stable Baselines3 (SB3, https://github.com/DLR-RM/stable-baselines3/tree/master). I was expecting algorithms like ICM, HIRO, DIAYN, ... Why are there no model-based, skill-chaining, or hierarchical-RL algorithms implemented there?
-
Stable baselines! Where my people at?
Discord is more focused, and they have a page for people who want to contribute: https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md
-
SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
Therefore, I debugged this error down to the ReplayBuffer that was imported from `SB3`.
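This error typically appears when a Gymnasium-created environment is handed to an SB3 version built against the older gym API: the `isinstance` check against `gym.spaces.Box` fails for a `gymnasium.spaces.Box`, so the space is reported as unsupported. A minimal sketch of the fix, assuming SB3 >= 2.0 (which targets Gymnasium natively):

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Pendulum-v1 has exactly the observation space from the error message:
# Box([-1. -1. -8.], [1. 1. 8.], (3,), float32)
env = gym.make("Pendulum-v1")

# With SB3 >= 2.0, Gymnasium spaces are recognized and the replay
# buffer is created without raising NotImplementedError.
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000)
```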
-
Exporting an A2C model created with stable-baselines3 to PyTorch
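The thread body is not quoted here, but as a hedged sketch: an SB3 model's `policy` attribute is already a `torch.nn.Module`, so its weights can be exported and used with plain PyTorch.

```python
import torch as th
from stable_baselines3 import A2C

model = A2C("MlpPolicy", "CartPole-v1").learn(total_timesteps=1_000)

# model.policy is a regular torch.nn.Module; save just its weights
th.save(model.policy.state_dict(), "a2c_policy.pth")

# Load them back into a matching module (e.g. a re-created policy)
# for SB3-free inference.
state_dict = th.load("a2c_policy.pth")
```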
-
Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs highly difficult and has resulted in a fractured ecosystem.
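A minimal sketch of the dm-control case, assuming Shimmy's `DmControlCompatibilityV0` wrapper; dm-control returns dict observations, so SB3's `MultiInputPolicy` is used:

```python
from dm_control import suite
from shimmy import DmControlCompatibilityV0
from stable_baselines3 import SAC

# Load a dm-control task and wrap it into a Gymnasium-compatible env
dm_env = suite.load(domain_name="cartpole", task_name="balance")
env = DmControlCompatibilityV0(dm_env)

model = SAC("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```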
-
Stable-Baselines3 v1.8 Release
Changelog: https://github.com/DLR-RM/stable-baselines3/releases/tag/v1.8.0
-
[P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up
Great project! One question, though: is there any reason why you are not using existing RL implementations, such as Stable Baselines, instead of creating your own?
-
Is stable-baselines3 compatible with gymnasium/gymnasium-robotics?
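Short answer: yes, from v2.0 onward SB3 targets Gymnasium natively. A hedged sketch with a gymnasium-robotics goal environment and SB3's HER replay buffer (env id and registration behavior assumed; check the gymnasium-robotics docs):

```python
import gymnasium as gym
import gymnasium_robotics  # importing registers the Fetch/Hand envs (assumed)
from stable_baselines3 import SAC, HerReplayBuffer

env = gym.make("FetchReach-v2")  # goal-conditioned dict observations

model = SAC(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    verbose=1,
)
model.learn(total_timesteps=10_000)
```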
What are some alternatives?
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
tianshou - An elegant PyTorch deep reinforcement learning library.
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
machin - Reinforcement learning library (framework) designed for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
rl8 - A high throughput, end-to-end RL library for infinite horizon tasks.
rlpyt - Reinforcement Learning in PyTorch
torchbeast - A PyTorch Platform for Distributed RL
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros