Shap Alternatives
Similar projects and alternatives to shap
- shapash: 🔅 Shapash makes Machine Learning models transparent and understandable by everyone
- Transformer-Explainability: [CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications made by Transformer-based networks.
- lucid: A collection of infrastructure and tools for research in neural network interpretability.
- awesome-production-machine-learning: A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
- jellyfish: 🪼 A Python library for doing approximate and phonetic matching of strings.
- xbyak: A JIT assembler for x86 (IA-32) / x64 (AMD64, x86-64) supporting MMX/SSE/SSE2/SSE3/SSSE3/SSE4/FPU/AVX/AVX2/AVX-512, implemented as a C++ header.
- imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
- articulated-animation: Code for the paper Motion Representations for Articulated Animation.
shap reviews and mentions
- Model interpretation with many features
Either https://github.com/slundberg/shap or https://github.com/marcotcr/lime would be relevant to you, especially if you want to explain a single prediction.
- Random Forest Estimation Question
Option 4: compute SHAP values (https://github.com/slundberg/shap) to better understand what the RF did.
- What Are the Most Important Statistical Ideas of the Past 50 Years?
Seconding Christoph Molnar's excellent writeup. I also find the README and example notebooks in Scott Lundberg's GitHub repo to be a great way to get started. There are also references there to the original papers, which are surprisingly readable, imo. https://github.com/slundberg/shap
- [Q] What's the community's opinion of "interpretable ML/AI"?
I've become a zealot about parametric stats, specifically from the Bayesian paradigm. Something about studying the core business problem, choosing the best distribution(s), and making inferences has been really rewarding for me. But increasingly, I'm seeing tools like SHAP, which allegedly enable users of black-box ML models to intuit what/how their models "think". (SHAP is just one example.)
- Looking into the "black box" of a neural network
- Comparing Strings (Street Names) With Machine Learning
The more features a model has, the longer it takes to make a prediction. To help you find a suitable set of features, I have two suggestions: (1) recursive feature selection and (2) SHAP values. Either of these methods can save you time as you find the right set of features for your model.
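The first suggestion can be sketched with scikit-learn's `RFE`, which repeatedly refits a model and drops the weakest feature each round; the synthetic data, model choice, and feature counts below are arbitrary assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic data: 10 features, only 4 of them informative
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

# Refit the forest repeatedly, eliminating one feature per iteration
selector = RFE(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=4,
    step=1,
).fit(X, y)

print(selector.support_)   # boolean mask over features: True = kept
print(selector.ranking_)   # 1 = selected; higher ranks were dropped earlier
```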
- [D] Has anyone ever used the SHAP and LIME models in machine learning?
- How an ML algorithm shows which aspect of a comparison contributes more to the result?
If you are using more of a black-box method, two of the more common ways to determine how your independent variables affect your dependent variable are Shapley values and LIME. Shapley values come from game theory in economics. Basically, the approach asks how much each feature contributes to the predicted value, compared to the average prediction, by looking at the average marginal contribution of a specific feature value across all possible combinations of feature values. A good Python implementation and more details can be found here.
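That "average marginal contribution across all combinations" can be computed exactly for a tiny model. The value function below is made up purely for illustration (a linear model with one interaction term); real libraries approximate this, since the exact sum is exponential in the number of features:

```python
from itertools import combinations
from math import factorial

x = {"a": 1.0, "b": 2.0, "c": 3.0}  # illustrative feature values

def v(S):
    """Value function: toy model output when only features in S are present."""
    g = lambda f: x[f] if f in S else 0.0
    return g("a") + 2 * g("b") + g("a") * g("c")

def shapley(player, players):
    """Exact Shapley value: weighted average marginal contribution of
    `player` over every coalition of the remaining players."""
    n = len(players)
    others = [p for p in players if p != player]
    phi = 0.0
    for k in range(n):
        weight = factorial(k) * factorial(n - 1 - k) / factorial(n)
        for S in combinations(others, k):
            phi += weight * (v(set(S) | {player}) - v(set(S)))
    return phi

phis = {p: shapley(p, list(x)) for p in x}
print(phis)  # ~{'a': 2.5, 'b': 4.0, 'c': 1.5}: the a*c interaction splits evenly

# Efficiency property: the values sum to v(all features) - v(no features)
print(abs(sum(phis.values()) - (v(set(x)) - v(set()))) < 1e-9)
```

Note how `b`'s purely linear effect (2 × 2.0 = 4.0) is attributed entirely to `b`, while the `a*c` interaction (1.0 × 3.0 = 3.0) is shared equally between `a` and `c`.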
- Jim Keller moves to AI chip startup
Their marketing material states: "Facilitating machines to go beyond pattern recognition and into cause-and-effect learning".
I wonder what they are referring to. Are they accelerating what SHAP's GradientExplainer [1] does (namely, crafting inputs at a specific layer, propagating forward to see the influence on the class prediction, and in effect backpropagating to pixels)? Or is it something closer to Judea Pearl's work on causality?
[1] https://github.com/slundberg/shap#deep-learning-example-with...
Stats
slundberg/shap is an open source project licensed under the MIT License, an OSI-approved license.