Transformer-MM-Explainability vs shap

| | Transformer-MM-Explainability | shap |
|---|---|---|
| Mentions | 3 | 38 |
| Stars | 709 | 21,677 |
| Growth | - | 1.1% |
| Activity | 0.0 | 9.3 |
| Latest commit | 8 months ago | about 18 hours ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |

Stars: the number of stars that a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

Posts with mentions or reviews of shap

Shap v0.45.0

[D] Convert a ML model into a rule based system
Something like GitHub - shap/shap: A game theoretic approach to explain the output of any machine learning model?

[P] tinyshap: A minimal implementation of the SHAP algorithm
A less-than-100-lines-of-code implementation of KernelSHAP, written because I had a hard time understanding shap's code.

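For orientation, here is a minimal sketch of the KernelSHAP idea that post refers to. It is an illustration under my own simplifications, not tinyshap's actual code: sample random feature coalitions, replace "absent" features with background values, and fit a weighted linear model whose coefficients approximate the Shapley values.

```python
import numpy as np
from math import comb

def kernel_shap(predict, x, background, n_samples=2048, seed=0):
    """Approximate Shapley values for a single instance x (1D array).

    predict:    function mapping a 2D array of inputs to 1D model outputs
    background: 1D array of reference values used for "absent" features
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    # 1) Sample random coalitions z in {0,1}^d (1 = feature present).
    Z = rng.integers(0, 2, size=(n_samples, d))
    # 2) Masked inputs: present features come from x, absent ones from background.
    X_masked = np.where(Z == 1, x, background)
    y = predict(X_masked)
    # 3) Shapley kernel weights; the all-present and all-absent coalitions get
    #    very large weights to approximately enforce the efficiency constraint.
    sizes = Z.sum(axis=1)
    w = np.empty(n_samples)
    for i, s in enumerate(sizes):
        if s == 0 or s == d:
            w[i] = 1e6
        else:
            w[i] = (d - 1) / (comb(d, int(s)) * s * (d - s))
    # 4) Weighted least squares: intercept = base value, coefficients = SHAP values.
    A = np.column_stack([np.ones(n_samples), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[0], coef[1:]   # (base_value, shap_values)
```

The weights follow the Shapley kernel; a fuller implementation would sample coalitions in proportion to those weights rather than uniformly, which converges faster.
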
What’s after model adequacy?
We use tools like SHAP to explain to stakeholders what the model is doing.

Feature importance with feature engineering?

Model interpretation with many features
shap (https://github.com/slundberg/shap) or lime (https://github.com/marcotcr/lime) would be relevant to you, especially if you want to look at explaining a single prediction.

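To make that pointer concrete, here is a minimal single-prediction example with lime; the dataset, model, and parameters are stand-ins chosen for illustration:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,                          # training data defines perturbation statistics
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this one instance and fits a local linear surrogate around it.
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4, top_labels=1
)
label = exp.available_labels()[0]
print(exp.as_list(label=label))         # (feature condition, local weight) pairs
```
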
SHAP Value Interpretation
See this closed issue for more detail: https://github.com/slundberg/shap/issues/29

Christoph Molnar on SHAP Library
Dr. Molnar recently had a semi-viral post on LinkedIn and Twitter in which he highlights the booming popularity (and power) of SHAP for explainable AI, which I agree with, but also its problems: the open-source implementation has thousands of pull requests, bugs, and issues, yet there is no permanent or significant funding to go in and fix them.

Random Forest Estimation Question
Option 4: create SHAP values (https://github.com/slundberg/shap) to better understand what the RF did.

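As a sketch of that option: shap's TreeExplainer handles tree ensembles such as random forests directly; the dataset and model below are placeholders of my choosing.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(rf)
shap_values = explainer(X)            # an Explanation object, one row per sample

shap.plots.beeswarm(shap_values)      # global: which features move predictions most
shap.plots.waterfall(shap_values[0])  # local: how one prediction decomposes
```
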
Model explainability
txtai pipelines are wrappers around Hugging Face pipelines, with logic to integrate easily with txtai's workflow framework. Given that, we can use the SHAP library to explain predictions.

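A sketch of that pattern, assuming a plain Hugging Face pipeline in place of the txtai wrapper (the model name is my choice, and txtai-specific glue is omitted), following shap's documented text-explanation usage:

```python
import shap
import transformers

# A stock Hugging Face pipeline stands in for the txtai wrapper here;
# shap knows how to wrap transformers pipelines and mask input tokens.
classifier = transformers.pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,   # shap expects scores for every label
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["The workflow framework made this painless."])

shap.plots.text(shap_values)  # token-level contributions, per label
```
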
What are some alternatives?
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
TorchDrift - Drift Detection for your PyTorch Models
Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
explainerdashboard - Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.
captum - Model interpretability and understanding for PyTorch
clip-italian - CLIP (Contrastive Language–Image Pre-training) for Italian
lime - Lime: Explaining the predictions of any machine learning classifier
pytea - PyTea: PyTorch Tensor shape error analyzer
interpret - Fit interpretable models. Explain blackbox machine learning.
WeightWatcher - The WeightWatcher tool for predicting the accuracy of Deep Neural Networks
awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning