shap vs lime
| | shap | lime |
|---|---|---|
| Mentions | 38 | 14 |
| Stars | 21,536 | 11,265 |
| Growth | 1.6% | - |
| Activity | 9.4 | 0.0 |
| Latest commit | 10 days ago | 4 days ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shap
- Shap v0.45.0
- [D] Convert a ML model into a rule based system
  Something like GitHub - shap/shap: A game theoretic approach to explain the output of any machine learning model?
- [P] tinyshap: A minimal implementation of the SHAP algorithm
  An implementation of KernelSHAP in under 100 lines of code, written because I had a hard time understanding shap's code.
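  For intuition, here is a rough sketch of the KernelSHAP idea behind such a minimal implementation: sample feature coalitions, fill "missing" features from a background row, weight each coalition with the Shapley kernel, and fit a weighted linear model whose coefficients approximate the SHAP values. The function and variable names (`predict`, `x`, `background`) are illustrative, not taken from tinyshap or shap.

```python
# Minimal KernelSHAP sketch for a single instance; illustrative only.
import numpy as np
from math import comb

def kernel_shap(predict, x, background, n_samples=2048, seed=0):
    """Approximate Shapley values for one instance x (1-D array)."""
    rng = np.random.default_rng(seed)
    M = x.shape[0]

    # 1. Sample random coalitions z in {0,1}^M (which features are "present").
    Z = rng.integers(0, 2, size=(n_samples, M))
    Z[0, :] = 0          # all-missing coalition
    Z[-1, :] = 1         # all-present coalition

    # 2. Present features come from x, missing ones from the background row,
    #    then the model is evaluated on the masked inputs.
    X_masked = np.where(Z == 1, x, background)
    y = predict(X_masked)

    # 3. Shapley kernel weights (largest for very small / very large coalitions).
    sizes = Z.sum(axis=1)
    weights = np.zeros(n_samples)
    for i, s in enumerate(sizes):
        if 0 < s < M:
            weights[i] = (M - 1) / (comb(M, int(s)) * s * (M - s))
        else:
            weights[i] = 1e6  # effectively pin the empty and full coalitions

    # 4. Weighted linear regression of y on Z; the coefficients are the
    #    approximate SHAP values and the intercept is the base value.
    Z1 = np.column_stack([np.ones(n_samples), Z])
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(Z1.T @ W @ Z1, Z1.T @ W @ y, rcond=None)
    return coef[0], coef[1:]
```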
- What’s after model adequacy?
  We use tools like SHAP to explain to stakeholders what the model is doing.
- Feature importance with feature engineering?
- Model interpretation with many features
  https://github.com/slundberg/shap or https://github.com/marcotcr/lime would be relevant to you, especially if you want to explain a single prediction.
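  As a hedged sketch of that single-prediction use case with shap, assuming a fitted scikit-learn model `model` and a pandas DataFrame `X` (both names are illustrative):

```python
import shap

# Let shap pick an appropriate explainer for the model and background data.
explainer = shap.Explainer(model, X)

# Explain just one row and plot its per-feature contributions.
shap_values = explainer(X.iloc[[0]])
shap.plots.waterfall(shap_values[0])
```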
- SHAP Value Interpretation
  See this closed issue for more detail: https://github.com/slundberg/shap/issues/29
- Christoph Molnar on SHAP Library
  Dr. Molnar recently had a semi-viral post on LinkedIn and Twitter highlighting the booming popularity (and power) of SHAP for explainable AI (which I agree with), but also noting that it comes with problems: the open-source implementation has thousands of pull requests, bugs, and issues, yet there is no permanent or significant funding to go in and fix them.
- Random Forest Estimation Question
  Option 4) Create SHAP values (https://github.com/slundberg/shap) to better understand what the RF did.
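  A minimal sketch of that option, assuming a fitted random forest `rf` and feature matrix `X` (illustrative names); TreeExplainer is shap's fast path for tree ensembles:

```python
import shap

# Exact SHAP values for tree ensembles such as random forests.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)

# Global summary of which features drive the forest's predictions.
shap.summary_plot(shap_values, X)
```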
- Model explainability
  txtai pipelines are wrappers around Hugging Face pipelines, with logic to integrate easily with txtai's workflow framework. Given that, we can use the SHAP library to explain predictions.
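  As a rough illustration of that combination, here is how a Hugging Face text-classification pipeline (the kind txtai wraps) can be passed to shap; the model name and input sentence are examples, not from the original post:

```python
import shap
import transformers

# A standard Hugging Face sentiment pipeline; txtai pipelines wrap objects like this.
classifier = transformers.pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["The movie was surprisingly good."])

# Token-level contributions to the predicted sentiment.
shap.plots.text(shap_values[0])
```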
lime
- Ethical and Bias Testing in Generative AI: A Practical Guide to Ensuring Ethical Conduct with Test Cases and Tools
  Other tools like Fairness Indicators, LIME, and SHAP are also valuable resources for ethical and bias testing.
- Government sets out 'adaptable' regulation for AI
  A basic form that's useful; it's quite easy. I've used LIME a lot: https://github.com/marcotcr/lime
- Model interpretation with many features
  https://github.com/slundberg/shap or https://github.com/marcotcr/lime would be relevant to you, especially if you want to explain a single prediction.
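  For the lime side of the same question, a minimal sketch on tabular data, assuming a fitted classifier `clf`, a training array `X_train`, and a list `feature_names` (all names are illustrative):

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)

# Explain a single row by fitting a local surrogate model around it.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=10)
print(exp.as_list())  # (feature condition, weight) pairs for this one prediction
```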
- [P] Understanding LIME | Explainable AI
  This is a nice brief introduction. Where you could improve is showing how each part of the presentation maps to code, so people can play around with it. My advice would be to link to the lime tutorials and fill in any gaps with notebooks of your own. If you can direct your viewers to practice what you explain, and also have safety nets where you explain common problems and solutions, you can differentiate your content from the dozens of other creators covering the same tools and concepts.
- The cause of a decision in Swahili social media sentiments
  In today's article, I will walk you through building a machine learning model for Swahili social media sentiment classification, interpreting each prediction of the final model with Local Interpretable Model-Agnostic Explanations (LIME).
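  A hedged sketch of that per-prediction interpretability step for a text sentiment classifier, assuming a scikit-learn `pipeline` (vectorizer plus classifier) exposing `predict_proba`; the example sentence is invented for illustration:

```python
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["negative", "positive"])

# classifier_fn must map a list of strings to class probabilities.
exp = explainer.explain_instance(
    "Huduma ilikuwa nzuri sana",   # example sentence, not from the article
    pipeline.predict_proba,
    num_features=6,
)
print(exp.as_list())  # words pushing the prediction towards each class
```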
- "We need to take a pause," research scientist and physician Leo Anthony Celi from the Massachusetts Institute of Technology told the Boston Globe after learning that AI can predict people's race from X-ray images - Science Alert
  There are a lot of tools out there that can assist them with that. LIME, for example.
- What are some cool error analysis tricks you've seen?
  I'm a big fan of LIME (https://github.com/marcotcr/lime) and of sampling random errors.
- Cause of overfitting using vgg16 transfer learning
  Or you could see what activates misclassified labels (e.g. with LIME, https://github.com/marcotcr/lime) and try to understand whether there are common causes (e.g. reflection, different lighting, background, etc.).
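  As a rough sketch of that kind of inspection with LIME's image explainer, assuming a Keras-style `model` and an RGB image array `img` (illustrative names):

```python
from lime.lime_image import LimeImageExplainer
from skimage.segmentation import mark_boundaries

explainer = LimeImageExplainer()

# Perturb superpixels of the image and fit a local surrogate model.
explanation = explainer.explain_instance(
    img.astype("double"),   # H x W x 3 image
    model.predict,          # batch of images -> class probabilities
    top_labels=3,
    hide_color=0,
    num_samples=1000,
)

# Highlight the superpixels that pushed the prediction toward the top label.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(image / 255.0, mask)
```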
- [Q] What's the community's opinion of "interpretable ML/AI"?
- GitHub - marcotcr/lime: Lime: Explaining the predictions of any machine learning classifier {Python}
What are some alternatives?
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions
Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
anchor - Code for "High-Precision Model-Agnostic Explanations" paper
captum - Model interpretability and understanding for PyTorch
Fruit-Images-Dataset - Fruits-360: A dataset of images containing fruits and vegetables
interpret - Fit interpretable models. Explain blackbox machine learning.
Cause-of-decision-in-Swahili-sentiments - A repository demonstrating the cause of a decision (explainability) when classifying Swahili sentiments for business needs.
awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
shap - A game theoretic approach to explain the output of any machine learning model. [Moved to: https://github.com/shap/shap]
lucid - A collection of infrastructure and tools for research in neural network interpretability.