Jupyter Notebook Interpretability

Open-source Jupyter Notebook projects categorized as Interpretability

Top 11 Jupyter Notebook Interpretability Projects

  • shap

    A game theoretic approach to explain the output of any machine learning model.

  • Project mention: Shap v0.45.0 | news.ycombinator.com | 2024-03-08
  • lucid

    A collection of infrastructure and tools for research in neural network interpretability.

  • shapash

🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

  • Project mention: GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models | /r/learnmachinelearning | 2023-06-26
  • imodels

Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).

  • transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.

  • Transformer-MM-Explainability

    [ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.

  • tcav

    Code for the TCAV ML interpretability project

  • diffusers-interpret

Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.

  • kmeans-feature-importance

    Adding feature_importances_ property to sklearn.cluster.KMeans class

  • augmented-interpretable-models

    Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.

  • Vision-DiffMask

    Official PyTorch implementation of Vision DiffMask, a post-hoc interpretation method for vision models.
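The "game theoretic approach" behind shap refers to Shapley values: a feature's attribution is its average marginal contribution to the prediction across all orderings of the other features. A minimal, self-contained sketch of that idea (a toy exact computation, not shap's actual API, which uses fast approximations):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    # Exact Shapley values for a small cooperative game.
    # Exponential in len(players); shap's algorithms approximate this.
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[p] += weight * (value_fn(coalition + (p,)) - value_fn(coalition))
    return phi

# Toy additive "model": the prediction is the sum of the present features'
# contributions, so each feature's Shapley value equals its own contribution.
contrib = {"age": 2.0, "income": 5.0, "zip": 0.0}
predict = lambda coalition: sum(contrib[f] for f in coalition)
print(shapley_values(list(contrib), predict))  # {'age': 2.0, 'income': 5.0, 'zip': 0.0}
```

For a non-additive model the Shapley values additionally account for feature interactions, which is what makes the exact computation exponential and shap's approximations necessary.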

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates how often a repo was mentioned in the last 12 months or since we started tracking (Dec 2020).
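The kmeans-feature-importance entry above exposes a `feature_importances_`-style score for clustering. A numpy-only sketch of one plausible heuristic (hypothetical, not the project's exact formula): rank features by how much the cluster centroids spread apart along each axis.

```python
import numpy as np

def centroid_spread_importance(centers):
    # Hypothetical heuristic (not the project's exact formula): score each
    # feature by the variance of the cluster centroids along that axis,
    # then normalize so the scores sum to 1.
    spread = np.asarray(centers).var(axis=0)
    return spread / spread.sum()

# Two clusters that are separated only along feature 0:
centers = np.array([[0.0, 1.0], [10.0, 1.0]])
print(centroid_spread_importance(centers))  # feature 0 gets all the weight
```

In the actual project this kind of score is attached to `sklearn.cluster.KMeans` as a `feature_importances_` property, mirroring the attribute sklearn's tree-based estimators provide.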


Index

What are some of the best open-source Interpretability projects in Jupyter Notebook? This list will help you:

Project Stars
1 shap 21,580
2 lucid 4,599
3 shapash 2,642
4 imodels 1,288
5 transformers-interpret 1,207
6 Transformer-MM-Explainability 701
7 tcav 615
8 diffusers-interpret 259
9 kmeans-feature-importance 61
10 augmented-interpretable-models 38
11 Vision-DiffMask 27
