| | rliable | cleverhans |
|---|---|---|
| Mentions | 15 | 3 |
| Stars | 699 | 6,079 |
| Stars growth | 1.7% | 1.2% |
| Activity | 2.5 | 0.0 |
| Last commit | about 1 month ago | 24 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rliable
- [D] What is standard practice in RL when reporting average returns across multiple seeds in a table or a plot?
You can also look up https://github.com/google-research/rliable and the paper https://arxiv.org/pdf/2108.13264.pdf (NeurIPS 2021 outstanding paper). IMHO the field would benefit if it moves in that direction.
- What is the next booming topic in Deep RL?
You might like the best paper at NeurIPS last year: https://agarwl.github.io/rliable/
- "Human-level Atari 200x faster", DeepMind 2022 (200x reduction in dataset scale required by Agent57 for human performance)
Deep RL at the Edge of the Statistical Precipice https://agarwl.github.io/rliable/
- How Hugging Face 🤗 can contribute to the Deep Reinforcement Learning Ecosystem?
- Deep RL at the Edge of Statistical Precipice (NeurIPS Outstanding Paper)
You can find the paper, slides and poster at agarwl.github.io/rliable. The OP already put the poster here.
- Google Highlights How Statistical Uncertainty Of Outcomes Must Be Considered To Evaluate Deep RL Reliably and Proposes A Python Library Called ‘RLiable’
A recent Google study highlights how statistical uncertainty of outcomes must be considered for deep RL evaluation to be reliable, especially when only a few training runs are used. Google has also released an easy-to-use Python library called RLiable to help researchers incorporate these tools.
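A core recommendation of that work is to report interquartile-mean (IQM) scores with stratified bootstrap confidence intervals instead of bare means over a handful of seeds. The sketch below illustrates the idea in pure Python; it is not the rliable API itself, and the function names are mine:

```python
import random
import statistics

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of scores.
    More robust to outlier seeds than the mean, more sensitive than the median."""
    s = sorted(scores)
    n = len(s)
    lo, hi = n // 4, n - n // 4
    return statistics.mean(s[lo:hi])

def bootstrap_ci(scores, metric=iqm, reps=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for an aggregate metric:
    resample the runs with replacement, recompute the metric each time,
    and take the empirical (alpha/2, 1 - alpha/2) quantiles."""
    rng = random.Random(seed)
    estimates = sorted(
        metric([rng.choice(scores) for _ in scores]) for _ in range(reps)
    )
    lo = estimates[int(reps * alpha / 2)]
    hi = estimates[int(reps * (1 - alpha / 2)) - 1]
    return metric(scores), (lo, hi)

# Example: normalized returns from 10 independent training seeds.
returns = [0.42, 0.55, 0.61, 0.38, 0.97, 0.50, 0.58, 0.44, 0.12, 0.53]
point, (low, high) = bootstrap_ci(returns)
```

The real library wraps the same pattern (e.g. aggregate metrics plus interval estimates over a score matrix of runs × tasks) with vectorized NumPy implementations.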
- [R] Rliable: Better Evaluation for Reinforcement Learning—A Visual Explanation
Website: https://agarwl.github.io/rliable/
- Towards creating better reward functions in a custom environment | Sensitivity analysis [Question]
First off, "performance" is highly speculative. So make sure you nail down what you mean and ensure reliability of those measurements. Check out https://github.com/google-research/rliable.
- Deep Reinforcement Learning at the Edge of the Statistical Precipice
- Best RL papers from the past year or two?
Our NeurIPS'21 oral on Deep RL at the Edge of the Statistical Precipice would make for a fun read :)
cleverhans
- Clever Hans (Intelligence Misattribution)
I only knew of this story from looking up the name of this library on adversarial DL https://github.com/cleverhans-lab/cleverhans
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- [D] Does anyone care about adversarial attacks anymore?
I feel as though this area has not received much attention over the last couple of years. The CleverHans project has gone stale and I haven't heard of many new results recently. Has the community lost interest in this area? Did we decide that adversarial attacks aren't such a problem in practical applications?
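For context, the canonical attack that CleverHans popularized is the Fast Gradient Sign Method (FGSM): perturb the input by epsilon times the sign of the loss gradient with respect to the input. Below is a minimal illustrative sketch against a toy logistic-regression model with a hand-derived gradient; the names are mine, not the CleverHans API:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    """Binary cross-entropy for a single example."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, b, x, y, eps):
    """FGSM: x_adv = x + eps * sign(dL/dx).
    For logistic regression the input gradient is (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy model and a correctly classified positive example.
w, b = [2.0, -1.0], 0.1
x, y = [0.8, 0.3], 1
x_adv = fgsm(w, b, x, y, eps=0.25)
```

The attack moves each input coordinate a fixed step in the direction that most increases the loss, which is why even small, imperceptible epsilon budgets can flip a model's prediction.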
What are some alternatives?
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
advertorch - A Toolbox for Adversarial Robustness Research
AIX360 - Interpretability and explainability of data and machine learning models
aws-security-workshops - A collection of the latest AWS Security workshops
uncertainty-toolbox - Uncertainty Toolbox: a Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization
TorchDrift - Drift Detection for your PyTorch Models
delve - PyTorch model training and layer saturation monitor
pytea - PyTea: PyTorch Tensor shape error analyzer
backpack - BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient.
captum - Model interpretability and understanding for PyTorch