interpretable-ml-book vs shap

| | interpretable-ml-book | shap |
|---|---|---|
| Mentions | 37 | 40 |
| Stars | 4,827 | 23,203 |
| Growth | - | 1.0% |
| Activity | 4.2 | 9.1 |
| Latest commit | 7 days ago | 5 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
interpretable-ml-book
- Interpretable Machine Learning - A Guide for Making Black Box Models Explainable
- A Guide to Making Black Box Models Interpretable
-
So much for AI
If you're a student, I'd recommend this book: https://christophm.github.io/interpretable-ml-book/
-
Best way to make a random forest more explainable (need to know which features are driving the prediction)
Pretty much everyone shows SHAP plots now. Definitely the way to go. Check out the Christoph Molnar book. https://christophm.github.io/interpretable-ml-book/
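As a rough sketch of the workflow this comment describes (illustrative code, not from the thread; the dataset and hyperparameters are assumptions), the usual shap pattern for a random forest is:

```python
# Illustrative sketch: rank the features driving a random forest's
# predictions with SHAP. Dataset and settings are placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Beeswarm summary: features sorted by mean |SHAP value| across the data.
shap.summary_plot(shap_values, X)
```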
-
Is there another way to determine the effect of the features other than the inbuilt features importance and SHAP values? [Research] [Discussion]
Yes, there are many techniques beyond the two you listed. I suggest doing a survey of techniques (hint: explainable AI or XAI), starting with the following book: Interpretable Machine Learning.
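One such technique, sketched here with scikit-learn's built-in utility (model and data are illustrative assumptions), is permutation importance: shuffle one feature at a time on held-out data and measure how much the score drops.

```python
# Illustrative sketch: permutation importance as an alternative to
# impurity-based importances and SHAP values.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature 20 times on the test set and record the score drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.4f}")
```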
-
Which industry/profession/tasks require an aggregate analysis of data representing different physical objects (And how would you call that?)
Ah, alright. It sounds like you're looking for interpretability so I'd suggest this amazing overview of it by Christoph Molnar. If you choose the right models, or the right way of interpreting those, it can help a ton in communicating not only your results, but also what you did to obtain them.
-
What skills do I need to really work on?
Not necessarily; decision trees, Naive Bayes, etc., are interpretable. I'd refer to Molnar--specifically his Interpretable Machine Learning text--if you are interested in that subject.
-
Random forest vs multiple regression to determine predictor importance.
Consulting something like Interpretable Machine Learning or the documentation of a package like the vip package would also be a really, really good place to start.
-
The Rashomon Effect Explained - Does Truth Actually Exist? [13.46]
Just read a book called Interpretable Machine Learning, which focuses on analyzing ML models and determining which inputs have more impact on the result.
- Interpretable Machine Learning
shap
- Explainable AI: Algorithms and Methods for Interpreting Black Box Models
-
Extracting Concepts from GPT-4
How does this compare to or improve on applying something like SHAP[0][1] on a model?
[0] https://github.com/shap/shap
- Shap v0.45.0
-
[D] Convert a ML model into a rule based system
Something like GitHub - shap/shap: A game theoretic approach to explain the output of any machine learning model?
-
[P] tinyshap: A minimal implementation of the SHAP algorithm
An implementation of KernelSHAP in under 100 lines of code, written because I had a hard time understanding shap's code.
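tinyshap's own code isn't reproduced here, but the core of KernelSHAP fits in a few lines: enumerate (in practice, sample) coalitions of features, evaluate the model with missing features replaced by a background value, and fit a weighted linear regression whose coefficients are the SHAP values. The sketch below is my own illustration under those assumptions; the large weights on the empty and full coalitions approximate KernelSHAP's exact constraints.

```python
# Minimal KernelSHAP sketch (illustrative, not tinyshap's code).
# `f` maps a 2-D numpy array to 1-D predictions; `x` is the instance to
# explain; `background` is a single reference row used to mask features.
import itertools
from math import comb
import numpy as np

def kernel_shap(f, x, background):
    M = len(x)
    # Enumerate every coalition z in {0,1}^M (feasible only for small M;
    # the real library samples coalitions instead).
    Z = np.array(list(itertools.product([0, 1], repeat=M)))
    # Masked inputs: coalition features take x's value, the rest take
    # the background value.
    X_masked = np.where(Z == 1, x, background)
    y = f(X_masked)

    # Shapley kernel weights; the empty and full coalitions get a large
    # finite weight to approximate the exactness constraints.
    def weight(s):
        if s in (0, M):
            return 1e6
        return (M - 1) / (comb(M, s) * s * (M - s))

    w = np.array([weight(int(z.sum())) for z in Z])
    # Weighted least squares with an intercept: the intercept estimates
    # f(background) and the coefficients are the per-feature SHAP values.
    A = np.column_stack([np.ones(len(Z)), Z])
    AtW = A.T * w  # scale each column of A.T by its coalition weight
    phi = np.linalg.solve(AtW @ A, AtW @ y)
    return phi[1:]  # SHAP values; sum to roughly f(x) - f(background)
```

That efficiency property (the values summing to f(x) minus the background prediction) is a quick sanity check for any implementation.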
-
What's after model adequacy?
We use tools like SHAP to explain what the model is doing to stakeholders.
- Feature importance with feature engineering?
-
Model interpretation with many features
https://github.com/slundberg/shap or https://github.com/marcotcr/lime would be relevant to you, especially if you want to look at explaining a single prediction.
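For the single-prediction case mentioned above, a hedged sketch with shap's newer Explanation API (exact plotting calls vary a bit across shap versions; model and data are illustrative) might look like:

```python
# Illustrative sketch: explain one prediction with a waterfall plot.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# shap.Explainer dispatches to a suitable algorithm (a tree explainer here).
explainer = shap.Explainer(model, X)
sv = explainer(X)

# How each feature pushes this one prediction away from the average
# model output.
shap.plots.waterfall(sv[0])
```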
-
SHAP Value Interpretation
See this closed topic for more detail: https://github.com/slundberg/shap/issues/29
-
Christoph Molnar on SHAP Library
Dr. Molnar recently had a semi-viral post on LinkedIn and Twitter in which he highlights the booming popularity (and power) of using SHAP for explainable AI, which I agree with, but also points out that it comes with problems: the open-source implementation has thousands of pull requests, bugs, and issues, yet there is no permanent or significant funding to go in and fix them.
What are some alternatives?
stat_rethinking_2022 - Statistical Rethinking course winter 2022
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
machine-learning-yearning - Machine Learning Yearning book by Andrew Ng
Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
neural_regression_discontinuity - In this repository, I modify a quasi-experimental statistical procedure for time-series inference using convolutional long short-term memory networks.
captum - Model interpretability and understanding for PyTorch
serve - Build multimodal AI applications with cloud-native stack
lime - Lime: Explaining the predictions of any machine learning classifier
random-forest-importances - Code to compute permutation and drop-column importances in Python scikit-learn models
interpret - Fit interpretable models. Explain blackbox machine learning.
awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
anchor - Code for "High-Precision Model-Agnostic Explanations" paper