| | AIX360 | DiCE |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 1,533 | 1,270 |
| Growth | 2.0% | 0.9% |
| Activity | 8.2 | 8.2 |
| Latest commit | about 2 months ago | 16 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AIX360
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- [R] Explaining the Explainable AI: A 2-Stage Approach - Link to a free online lecture by the author in comments
  - One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (https://arxiv.org/abs/1909.03012, https://github.com/Trusted-AI/AIX360)
DiCE
- [D] Have researchers given up on traditional machine learning methods?
  - "All domains requiring high interpretability ignore deep learning entirely and put their research into traditional ML; see e.g. counterfactual examples, an important interpretability method in finance, or rule-based learning, important in medical and legal applications."
- [R] The Shapley Value in Machine Learning
  - "Counterfactual and recourse-based explanations are an alternative approach to model explanations. I used to work at a large financial institution, and we were researching whether counterfactual explanation methods would lead to better reason codes for adverse action notices."
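The counterfactual idea raised in these threads — "what minimal change to the input would flip the model's decision?" — can be illustrated without any library. The sketch below runs a greedy search over a single feature of a toy linear credit model; the model, feature names, threshold, and step size are all invented for illustration (DiCE wraps a real trained model and searches over many features instead).

```python
# Minimal counterfactual-search sketch (illustrative only; not the DiCE API).
# The scoring model, features, and threshold below are invented for this example.

def approve(applicant):
    """Toy credit model: approve when a linear score clears a threshold."""
    score = 0.5 * applicant["income"] - 0.3 * applicant["debt"]
    return score >= 20.0

def counterfactual(applicant, feature, step=1.0, max_steps=200):
    """Greedily nudge one feature upward until the model's decision flips."""
    candidate = dict(applicant)
    for _ in range(max_steps):
        if approve(candidate) != approve(applicant):
            return candidate  # first tried change that flips the outcome
        candidate[feature] += step
    return None  # no counterfactual found within the search budget

rejected = {"income": 30.0, "debt": 40.0}  # score = 3.0 -> rejected
cf = counterfactual(rejected, "income")
print(cf)  # income raised step by step until the score reaches 20.0
```

In a recourse setting, the returned counterfactual reads directly as a reason code: "you would have been approved with an income of X, all else equal." Real libraries additionally constrain the search to plausible, actionable feature changes.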
What are some alternatives?
- AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
- OmniXAI - A library for explainable AI.
- explainable-cnn - 📦 PyTorch-based visualization package for generating layer-wise explanations for CNNs.
- CARLA - A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms.
- cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both.
- interpret - Fit interpretable models. Explain blackbox machine learning.
- awesome-shapley-value - Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022).
- backpack - BackPACK, a backpropagation package built on top of PyTorch that efficiently computes quantities other than the gradient.
- DALEX - moDel Agnostic Language for Exploration and eXplanation.
- shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).