rliable
[NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds. (by google-research)
bsuite
bsuite is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent (by google-deepmind)
| | rliable | bsuite |
|---|---|---|
| Mentions | 15 | 2 |
| Stars | 699 | 1,464 |
| Growth | 1.7% | 0.4% |
| Activity | 2.5 | 0.0 |
| Last commit | about 1 month ago | 20 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
rliable
Posts with mentions or reviews of rliable. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-04.
- [D] What is standard practice in RL when reporting average returns across multiple seeds in a table or a plot?
  You can also look up https://github.com/google-research/rliable and https://arxiv.org/pdf/2108.13264.pdf (NeurIPS'21 outstanding paper). IMHO the field would benefit if it moves in that direction.
- What is the next booming topic in Deep RL?
  You might like the best paper at NeurIPS last year: https://agarwl.github.io/rliable/
- "Human-level Atari 200x faster", DeepMind 2022 (200x reduction in dataset scale required by Agent57 for human performance)
  Deep RL at the Edge of the Statistical Precipice: https://agarwl.github.io/rliable/
- How Hugging Face 🤗 can contribute to the Deep Reinforcement Learning Ecosystem?
- Deep RL at the Edge of the Statistical Precipice (NeurIPS Outstanding Paper)
  You can find the paper, slides, and poster at agarwl.github.io/rliable. The OP already put the poster here.
- Google Highlights How Statistical Uncertainty Of Outcomes Must Be Considered To Evaluate Deep RL Reliably and Proposes A Python Library Called ‘RLiable’
  A recent Google study highlights how the statistical uncertainty of outcomes must be considered for deep RL evaluation to be reliable, especially when only a few training runs are used. Google has also released an easy-to-use Python library called RLiable to help researchers incorporate these tools.
- [R] Rliable: Better Evaluation for Reinforcement Learning—A Visual Explanation
  Website: https://agarwl.github.io/rliable/
- Towards creating better reward functions in a custom environment | Sensitivity analysis [Question]
  First off, "performance" is highly speculative, so make sure you nail down what you mean and ensure the reliability of those measurements. Check out https://github.com/google-research/rliable.
- Deep Reinforcement Learning at the Edge of the Statistical Precipice
- Best RL papers from the past year or two?
  Our NeurIPS'21 oral on Deep RL at the Edge of the Statistical Precipice would make for a fun read :)
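
To make the statistical tooling discussed in these posts concrete, below is a minimal sketch of computing rliable's recommended aggregate metrics (IQM, median, mean) with stratified bootstrap confidence intervals, following the library's documented API. The algorithm name `my_algo` and the uniform random scores are hypothetical placeholders; rliable expects a dict mapping each method name to a score array of shape `(num_runs, num_tasks)`.

```python
# Minimal rliable sketch. "my_algo" and the uniform scores are
# hypothetical stand-ins for real per-run, per-task results.
import numpy as np
from rliable import library as rly
from rliable import metrics

# 5 seeds x 10 tasks of normalized scores.
scores = {"my_algo": np.random.uniform(size=(5, 10))}

def aggregate_func(x):
    # IQM (interquartile mean) is the point estimate the paper
    # recommends over plain mean or median.
    return np.array([
        metrics.aggregate_iqm(x),
        metrics.aggregate_median(x),
        metrics.aggregate_mean(x),
    ])

point_estimates, interval_estimates = rly.get_interval_estimates(
    scores, aggregate_func, reps=2000)  # stratified bootstrap CIs
print(point_estimates["my_algo"])     # [IQM, median, mean]
print(interval_estimates["my_algo"])  # lower/upper CI bounds
```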
bsuite
Posts with mentions or reviews of bsuite. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-25.
- Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
  This includes single-agent Gymnasium wrappers for DM Control, DM Lab, Behavior Suite, the Arcade Learning Environment, and OpenAI Gym V21 & V26. Multi-agent PettingZoo wrappers support DM Control Soccer, OpenSpiel, and Melting Pot. For more information, read the release notes.
- Towards creating better reward functions in a custom environment | Sensitivity analysis [Question]
  Second, if you're interested in evaluating the robustness of the algo implementation, then projects like https://github.com/deepmind/bsuite might help highlight issues, although they may not be relevant to your problem.
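
For context on how bsuite experiments are actually run, here is a minimal sketch of a random agent on a single bsuite environment. bsuite environments follow the dm_env API; `catch/0` is one of the standard bsuite_ids, and the random policy is a placeholder for a real agent.

```python
# Minimal bsuite sketch: run a random policy on the 'catch/0' experiment.
import numpy as np
import bsuite

env = bsuite.load_from_id("catch/0")        # returns a dm_env environment
num_actions = env.action_spec().num_values  # discrete action count

for _ in range(env.bsuite_num_episodes):    # episode budget set by the suite
    timestep = env.reset()
    while not timestep.last():
        action = np.random.randint(num_actions)  # random policy placeholder
        timestep = env.step(action)
```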
What are some alternatives?
When comparing rliable and bsuite you can also consider the following projects:
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
lab - A customisable 3D platform for agent-based AI research
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
Shimmy - An API conversion tool for popular external reinforcement learning environments
cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
dm_control - Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
meltingpot - A suite of test scenarios for multi-agent reinforcement learning.