shap
interpretable-ml-book
| | shap | interpretable-ml-book |
|---|---|---|
| Mentions | 38 | 36 |
| Stars | 21,580 | 4,673 |
| Growth | 1.8% | - |
| Activity | 9.4 | 4.7 |
| Latest commit | 6 days ago | about 2 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shap
- Shap v0.45.0
- [D] Convert a ML model into a rule based system
Something like GitHub - shap/shap (A game theoretic approach to explain the output of any machine learning model)?
- [P] tinyshap: A minimal implementation of the SHAP algorithm
An implementation of KernelSHAP in under 100 lines of code, written because I had a hard time understanding shap's code.
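The idea a minimal KernelSHAP implementation approximates can be shown exactly on a toy model: a feature's Shapley value is its average marginal contribution over all coalitions of the other features, with absent features replaced by baseline values. This is a hedged pure-Python sketch of that definition, not tinyshap's or shap's actual code; the toy model `f` and baseline are made up for illustration:

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values of f at point x.

    Features absent from a coalition take their `baseline` value;
    present features take their value from x.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Coalition S: features in S take x, the rest take baseline.
                z = [x[j] if j in subset else baseline[j] for j in features]
                z_with_i = list(z)
                z_with_i[i] = x[i]
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(z_with_i) - f(z))
    return phi

# Toy additive model: the Shapley values recover each term's contribution.
f = lambda v: 2 * v[0] + 3 * v[1]
phi = exact_shapley(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi == [2.0, 3.0], and sum(phi) equals f(x) - f(baseline).
```

The exact computation is exponential in the number of features, which is why KernelSHAP replaces the full enumeration with a weighted sampling of coalitions.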
- What's after model adequacy?
We use tools like SHAP to explain what the model is doing to stakeholders.
- Feature importance with feature engineering?
- Model interpretation with many features
This (https://github.com/slundberg/shap) or https://github.com/marcotcr/lime would be relevant to you, especially if you want to explain a single prediction.
- SHAP Value Interpretation
See this closed topic for more detail: https://github.com/slundberg/shap/issues/29
- Christoph Molnar on SHAP Library
Dr. Molnar recently had a semi-viral post on LinkedIn and Twitter highlighting the booming popularity [and power] of SHAP for explainable AI (which I agree with), but also its problems: the open-source implementation has thousands of pull requests, bugs, and issues, yet there is no permanent or significant funding to go in and fix them.
- Random Forest Estimation Question
Option 4: create SHAP values (https://github.com/slundberg/shap) to better understand what the RF did.
- Model explainability
txtai pipelines are wrappers around Hugging Face pipelines with logic to easily integrate with txtai's workflow framework. Given that, we can use the SHAP library to explain predictions.
interpretable-ml-book
- A Guide to Making Black Box Models Interpretable
- So much for AI
If you're a student, I'd recommend this book: https://christophm.github.io/interpretable-ml-book/
- Best way to make a random forest more explainable (need to know which features are driving the prediction)
Pretty much everyone shows SHAP plots now. Definitely the way to go. Check out the Christoph Molnar book. https://christophm.github.io/interpretable-ml-book/
- Is there another way to determine the effect of the features other than the inbuilt features importance and SHAP values? [Research] [Discussion]
Yes, there are many techniques beyond the two you listed. I suggest doing a survey of techniques (hint: explainable AI or XAI), starting with the following book: Interpretable Machine Learning.
- Which industry/profession/tasks require an aggregate analysis of data representing different physical objects (And how would you call that?)
Ah, alright. It sounds like you're looking for interpretability so I'd suggest this amazing overview of it by Christoph Molnar. If you choose the right models, or the right way of interpreting those, it can help a ton in communicating not only your results, but also what you did to obtain them.
- What skills do I need to really work on?
Not necessarily; decision trees, Naive Bayes, etc., are interpretable. I'd refer to Molnar, specifically his Interpretable Machine Learning text, if you are interested in that subject.
- Random forest vs multiple regression to determine predictor importance.
Consulting something like Interpretable Machine Learning or the documentation of a package like the vip package would also be a really, really good place to start.
- The Rashomon Effect Explained - Does Truth Actually Exist? [13.46]
Just read a book called Interpretable Machine Learning, which focuses on analyzing ML models and determining which inputs have the most impact on the result.
- Interpretable Machine Learning
- Saw this in my LinkedIn feed - what are your thoughts?
Calling it a book on SHAP undersells it. https://christophm.github.io/interpretable-ml-book/
What are some alternatives?
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
stat_rethinking_2022 - Statistical Rethinking course winter 2022
Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
machine-learning-yearning - Machine Learning Yearning book by Andrew Ng
captum - Model interpretability and understanding for PyTorch
jina - Build multimodal AI applications with cloud-native stack
lime - Lime: Explaining the predictions of any machine learning classifier
neural_regression_discontinuity - In this repository, I modify a quasi-experimental statistical procedure for time-series inference using convolutional long short-term memory networks.
interpret - Fit interpretable models. Explain blackbox machine learning.
random-forest-importances - Code to compute permutation and drop-column importances in Python scikit-learn models
awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
anchor - Code for "High-Precision Model-Agnostic Explanations" paper
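The permutation importance mentioned in the random-forest-importances entry above has a simple core: shuffle one column and measure how much the model's score drops. A hedged numpy sketch of that idea, not that repository's code; `model_fn` and `metric` are placeholder callables chosen for illustration:

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric, n_repeats=5, seed=0):
    """Mean drop in `metric` (higher is better) after shuffling each column."""
    rng = np.random.default_rng(seed)
    base = metric(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffles the column in place (view of Xp)
            drops.append(base - metric(y, model_fn(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: a "model" that only uses column 0 should assign
# positive importance to column 0 and none to column 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X[:, 0].copy()
neg_mse = lambda t, p: -np.mean((t - p) ** 2)
imp = permutation_importance(lambda A: A[:, 0], X, y, neg_mse)
```

Unlike the drop-column variant (also computed by that repository), this needs no retraining, which is why it is the cheaper default in most libraries.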