Top 11 Jupyter Notebook Interpretability Projects
-
shapash
Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
-
imodels
Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
-
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
-
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers," a novel method to visualize any Transformer-based network, with examples for DETR and VQA.
-
diffusers-interpret
Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
-
augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
-
Vision-DiffMask
Official PyTorch implementation of Vision DiffMask, a post-hoc interpretation method for vision models.
Jupyter Notebook Interpretability related posts
- Shap v0.45.0
- [D] Convert a ML model into a rule based system
- [P] tinyshap: A minimal implementation of the SHAP algorithm
- [R] VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
- What's after model adequacy?
- Feature importance with feature engineering?
- Model interpretation with many features
-
A note from our sponsor - WorkOS
workos.com | 23 Apr 2024
Index
What are some of the best open-source Interpretability projects in Jupyter Notebook? This list will help you:
# | Project | Stars
---|---|---
1 | shap | 21,580
2 | lucid | 4,599
3 | shapash | 2,642
4 | imodels | 1,288
5 | transformers-interpret | 1,207
6 | Transformer-MM-Explainability | 701
7 | tcav | 615
8 | diffusers-interpret | 259
9 | kmeans-feature-importance | 61
10 | augmented-interpretable-models | 38
11 | Vision-DiffMask | 27