shap VS interpretable-ml-book

Compare shap and interpretable-ml-book to see how they differ.

                   shap                interpretable-ml-book
Mentions           38                  36
Stars              21,580              4,673
Growth             1.8%                -
Activity           9.4                 4.7
Last commit        6 days ago          about 2 months ago
Language           Jupyter Notebook    Jupyter Notebook
License            MIT License         GNU General Public License v3.0 or later
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
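To make the recency weighting concrete, here is a hypothetical sketch of how such a score could be computed. The site's actual formula is not published on this page; the function name and the half-life parameter below are illustrative assumptions. The displayed 0-10 value then reads as a percentile rank over all tracked projects (top 10% corresponds to 9.0 or higher).

    # Hypothetical sketch: a recency-weighted activity score.
    # NOT the site's real formula; it only illustrates that recent
    # commits count for more than older ones.
    def activity_score(commit_ages_days, half_life_days=90):
        """Each commit contributes an exponentially decayed weight:
        a commit from today adds 1.0, one from half_life_days ago adds 0.5."""
        return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

    # Ten recent commits score far higher than ten old ones.
    print(activity_score([1, 3, 7, 14, 21, 30, 45, 60, 75, 90]))               # ~7.9
    print(activity_score([300, 330, 360, 390, 420, 450, 480, 510, 540, 570]))  # ~0.4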

shap

Posts that mention or review shap. We used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-12-06.
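For readers new to the library: shap computes SHAP values, which decompose an individual prediction into additive per-feature contributions. Below is a minimal usage sketch, assuming scikit-learn is installed; the synthetic dataset and model choice are illustrative only.

    # Minimal sketch of typical shap usage on a tree ensemble.
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic regression data keeps the example self-contained.
    X, y = make_regression(n_samples=500, n_features=8, random_state=0)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes exact SHAP values efficiently for tree models.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Each row of shap_values splits one prediction into per-feature contributions.
    shap.summary_plot(shap_values, X)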

interpretable-ml-book

Posts that mention or review interpretable-ml-book. We used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-02-18.

What are some alternatives?

When comparing shap and interpretable-ml-book, you can also consider the following projects:

shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

stat_rethinking_2022 - Statistical Rethinking course winter 2022

Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.

machine-learning-yearning - Machine Learning Yearning book by Andrew Ng

captum - Model interpretability and understanding for PyTorch

jina - ☁️ Build multimodal AI applications with a cloud-native stack

lime - Lime: Explaining the predictions of any machine learning classifier

neural_regression_discontinuity - In this repository, I modify a quasi-experimental statistical procedure for time-series inference using convolutional long short-term memory networks.

interpret - Fit interpretable models. Explain blackbox machine learning.

random-forest-importances - Code to compute permutation and drop-column importances in Python scikit-learn models

awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning

anchor - Code for "High-Precision Model-Agnostic Explanations" paper