rliable Alternatives
Similar projects and alternatives to rliable based on common topics and language
-
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (by carla-recourse)
-
bsuite
bsuite is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent
-
cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
-
dopamine
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
rliable reviews and mentions
-
[D] What is standard practice in RL when reporting average returns across multiple seeds in a table or a plot?
You can also look up rliable: https://github.com/google-research/rliable (paper: https://arxiv.org/pdf/2108.13264.pdf, a NeurIPS '21 outstanding paper). IMHO the field would benefit if it moves in that direction.
-
What is the next booming topic in Deep RL?
You might like the best paper at NeurIPS last year: https://agarwl.github.io/rliable/
-
"Human-level Atari 200x faster", DeepMind 2022 (200x reduction in dataset scale required by Agent57 for human performance)
Deep RL at the Edge of the Statistical Precipice https://agarwl.github.io/rliable/
-
How Hugging Face 🤗 can contribute to the Deep Reinforcement Learning Ecosystem?
-
Deep RL at the Edge of the Statistical Precipice (NeurIPS Outstanding Paper)
You can find the paper, slides and poster at agarwl.github.io/rliable. The OP already put the poster here.
-
Google Highlights How Statistical Uncertainty of Outcomes Must Be Considered to Evaluate Deep RL Reliably and Proposes a Python Library Called 'RLiable'
A recent Google study highlights how statistical uncertainty of outcomes must be considered for deep RL evaluation to be reliable, especially when only a few training runs are used. Google has also released an easy-to-use Python library called RLiable to help researchers incorporate these tools.
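The core recommendation of that work is to report robust aggregate metrics, such as the interquartile mean (IQM) across runs, together with bootstrap confidence intervals rather than a bare mean over a few seeds. As a rough illustration of the idea (not RLiable's actual API — the library computes stratified bootstrap CIs across tasks and runs), a minimal stdlib-only sketch might look like this, where the data and function names are hypothetical:

```python
import random
import statistics

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of scores
    (drops the bottom and top 25%), robust to outlier runs."""
    s = sorted(scores)
    n = len(s)
    return statistics.mean(s[n // 4 : n - n // 4])

def bootstrap_ci(scores, stat, reps=2000, alpha=0.05, seed=0):
    """Simple percentile-bootstrap confidence interval for an
    aggregate statistic over per-run scores."""
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(scores) for _ in scores]) for _ in range(reps)
    )
    lo = estimates[int(reps * alpha / 2)]
    hi = estimates[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical per-seed normalized returns; one outlier seed (3.0).
runs = [0.2, 0.4, 0.5, 0.6, 0.7, 0.9, 1.1, 3.0]
print(iqm(runs))                 # 0.675 — unaffected by the 3.0 outlier
print(bootstrap_ci(runs, iqm))   # interval conveying seed-level uncertainty
```

The actual library additionally provides performance profiles and probability-of-improvement plots; see the repo's README for its real interface.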
-
[R] Rliable: Better Evaluation for Reinforcement Learning—A Visual Explanation
Website: https://agarwl.github.io/rliable/
-
Towards creating better reward functions in a custom environment | Sensitivity analysis [Question]
First off, "performance" is highly speculative. So make sure you nail down what you mean and ensure reliability of those measurements. Check out https://github.com/google-research/rliable.
-
Deep Reinforcement Learning at the Edge of the Statistical Precipice
-
Best RL papers from the past year or two?
Our NeurIPS'21 oral on Deep RL at the Edge of the Statistical Precipice would make for a fun read :)
-
Stats
google-research/rliable is an open source project licensed under Apache License 2.0 which is an OSI approved license.
The primary programming language of rliable is Jupyter Notebook.