DiCE
OmniXAI
|  | DiCE | OmniXAI |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 1,270 | 805 |
| Growth | 2.4% | 3.2% |
| Activity | 8.2 | 5.2 |
| Latest commit | 11 days ago | 8 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
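The exact activity formula is not published on this page, but the description above (recent commits weigh more than older ones) suggests a recency-weighted sum. A minimal sketch of that idea, with an assumed exponential half-life decay purely for illustration:

```python
def activity_score(commit_ages_days, half_life_days=30):
    """Toy recency-weighted activity score.

    Each commit contributes a weight that halves every `half_life_days`,
    so a commit from today counts 1.0 and older commits count less.
    The half-life value is an assumption for illustration only.
    """
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with many recent commits outscores one whose commits are old,
# even if both have the same number of commits.
recent_project = activity_score([1, 2, 3, 5, 8])
stale_project = activity_score([200, 240, 300, 365, 400])
```

This matches the behavior described: DiCE (latest commit 11 days ago) would score higher than OmniXAI (8 months ago) under any such decay.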
DiCE
- [D] Have researchers given up on traditional machine learning methods?

  All domains that require high interpretability largely ignore deep learning and put their research into traditional ML; see e.g. counterfactual examples, an important family of interpretability methods in finance, or rule-based learning, important in medical and legal applications.
- [R] The Shapley Value in Machine Learning

  Counterfactual and recourse-based explanations are an alternative approach to model explanations. I used to work at a large financial institution, where we researched whether counterfactual explanation methods would lead to better reason codes for adverse action notices.
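The core idea behind counterfactual explanations can be sketched with a toy "credit decision" model: given a denied applicant, find a nearby input that flips the prediction to approved. This is only a minimal greedy illustration on synthetic data, not DiCE's actual algorithm, which optimizes for proximity, sparsity, and diversity across multiple counterfactuals; all names and data here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "credit" data: two features (income, debt); approve if income > debt.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_iter=500):
    """Greedy counterfactual search: nudge x along the direction that
    increases the probability of the opposite class until the predicted
    label flips, then return the modified input."""
    x = x.astype(float).copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]
    # For a linear model, the coefficient vector is the ascent direction
    # for class 1; negate it to move toward class 0.
    direction = model.coef_[0] if target == 1 else -model.coef_[0]
    for _ in range(max_iter):
        if model.predict(x.reshape(1, -1))[0] == target:
            return x
        x += step * direction
    return None  # no counterfactual found within the step budget

denied = np.array([-1.0, 1.0])   # low income, high debt -> predicted denied
cf = counterfactual(denied, clf)  # a nearby input the model would approve
```

The returned `cf` plays the role of a "reason code": the difference `cf - denied` shows which features would have to change, and by how much, for the decision to flip.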
OmniXAI
- Salesforce AI Open-Sources 'OmniXAI': A Python-Based Machine Learning Library That Provides a One-Stop Explainable AI (XAI) Solution to Analyze, Debug, and Interpret AI Models
What are some alternatives?
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
interpret - Fit interpretable models. Explain blackbox machine learning.
AIX360 - Interpretability and explainability of data and machine learning models
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
SegGradCAM - SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
harakiri - Help applications kill themselves
DALEX - moDel Agnostic Language for Exploration and eXplanation
stranger - Chat anonymously with a randomly chosen stranger
eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions
shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
neuro-symbolic-sudoku-solver - ⚙️ Solving sudoku using Deep Reinforcement learning in combination with powerful symbolic representations.