Top 23 Interpretability Open-Source Projects
- awesome-production-machine-learning: A curated list of awesome open-source libraries to deploy, monitor, version and scale your machine learning.
- pytorch-grad-cam: Advanced AI explainability for computer vision. Support for CNNs, Vision Transformers, classification, object detection, segmentation, image similarity and more.
- shapash: 🔅 User-friendly explainability and interpretability to develop reliable and transparent machine learning models.
- Awesome-Federated-Learning: FedML, the research and production integrated federated learning library: https://fedml.ai
- imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
- transformers-interpret: Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
- responsible-ai-toolbox: A suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems, empowering developers and stakeholders to develop and monitor AI more responsibly and to take better data-driven actions.
- Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network, with examples for DETR and VQA.
- decision-forests: A collection of state-of-the-art algorithms for the training, serving and interpretation of decision forest models in Keras.
- yggdrasil-decision-forests: A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
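The imodels entry above advertises concise, transparent models such as rule lists. As a flavor of what a rule-based model boils down to, here is a hypothetical one-rule (OneR-style) classifier in plain Python. It is only a sketch of the idea — imodels itself exposes sklearn-compatible estimators, and none of the names below come from its API.

```python
# Minimal sketch of rule-based classification in the spirit of imodels'
# rule lists: a hypothetical one-rule (OneR-style) classifier, not the
# actual imodels API.

def fit_one_rule(X, y):
    """Find the single (feature, threshold) split with the fewest errors."""
    best = None  # (feature index, threshold, error count)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            # Predict class 1 when feature j exceeds threshold t.
            preds = [1 if row[j] > t else 0 for row in X]
            errors = sum(p != label for p, label in zip(preds, y))
            if best is None or errors < best[2]:
                best = (j, t, errors)
    return best[0], best[1]

def predict_one_rule(rule, X):
    j, t = rule
    return [1 if row[j] > t else 0 for row in X]

# Toy data: the class is fully determined by the second feature.
X = [[1.0, 0.2], [2.0, 0.4], [1.5, 0.9], [0.5, 0.8]]
y = [0, 0, 1, 1]
rule = fit_one_rule(X, y)
print(rule, predict_one_rule(rule, X))
```

The appeal of this model family is that the fitted object is itself the explanation: "predict 1 when feature 1 exceeds 0.4" needs no post-hoc attribution.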
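transformers-interpret's two-line interface wraps gradient-based attribution (it builds on Captum) around a Hugging Face model and tokenizer. The core technique it relies on, integrated gradients, is easy to sketch on a toy model: average the gradient along a straight path from a baseline to the input, then scale by the input-minus-baseline difference. The code below illustrates only the method, not the library's API; the toy linear model and its analytic gradient are assumptions for the demo.

```python
# Sketch of integrated gradients, the attribution method that
# transformers-interpret applies under the hood (via Captum).
# Toy differentiable model, not the library's API.

def model(x):
    # toy linear model: f(x) = 2*x0 + 3*x1
    return 2.0 * x[0] + 3.0 * x[1]

def grad(x):
    # analytic gradient of the toy model (constant for a linear f)
    return [2.0, 3.0]

def integrated_gradients(x, baseline, steps=100):
    """IG_i ~= (x_i - b_i) * mean of dF/dx_i along the baseline->x path."""
    n = len(x)
    totals = [0.0] * n
    for k in range(1, steps + 1):
        point = [b + (xi - b) * k / steps for xi, b in zip(x, baseline)]
        g = grad(point)
        for i in range(n):
            totals[i] += g[i]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, totals)]

x, baseline = [1.0, 2.0], [0.0, 0.0]
attr = integrated_gradients(x, baseline)
# For a linear model the attributions recover the exact per-feature
# contributions, and they sum to f(x) - f(baseline) (completeness).
print(attr)
```

In the library itself, those per-feature attributions become per-token word attributions over a transformer's input embeddings.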
Project mention: Exploring Open-Source Alternatives to Landing AI for Robust MLOps | dev.to | 2023-12-13

One trove of treasures is the awesome-production-machine-learning repository on GitHub. This curated list provides a multitude of frameworks, libraries, and software designed to facilitate various stages of the ML lifecycle.
For the two examples we will look at, we will use pytorch_grad_cam, an excellent open-source package that makes working with Grad-CAM easy. The repo also hosts several other tutorials worth checking out.
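Under the hood, Grad-CAM itself is only a few lines of math: weight each convolutional activation map by the spatially averaged gradient of the target score, sum the weighted maps, and clamp negatives with ReLU. Here is a minimal NumPy sketch of that computation, assuming the target layer's activations and gradients are already in hand; this is the underlying method, not the pytorch_grad_cam API, and the toy arrays are made up for illustration.

```python
import numpy as np

# Minimal sketch of the Grad-CAM computation (Selvaraju et al.), not the
# pytorch_grad_cam API: weight each activation map by its spatially
# averaged gradient, combine, and clamp negatives with ReLU.

def grad_cam(activations, gradients):
    """activations, gradients: (channels, H, W) arrays from a conv layer."""
    # alpha_k: global-average-pool the gradients over the spatial dims
    weights = gradients.mean(axis=(1, 2))                        # (channels,)
    # weighted sum of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # normalize to [0, 1] for visualization
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: channel 0 pushes the score up, channel 1 slightly down;
# the maps are spatially uniform, so every location ends up equal.
acts = np.stack([np.ones((4, 4)), np.full((4, 4), 2.0)])
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -0.1)])
heatmap = grad_cam(acts, grads)
```

The package handles the part this sketch assumes away: it registers hooks on the target layer to capture activations and gradients during the forward and backward passes, then upsamples the heatmap onto the input image.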
Project mention: GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models | /r/learnmachinelearning | 2023-06-26
Project mention: Alibi: Open-source Python lib for ML model inspection and interpretation | news.ycombinator.com | 2023-06-02
Project mention: pyreft | https://arxiv.org/abs/2404.03592 | https://github.com/stanfordnlp/pyreft
Project mention: Xplique Is a Neural Networks Explainability Toolbox | news.ycombinator.com | 2024-02-02
Project mention: Why do tree-based models still outperform deep learning on tabular data? (2022) | news.ycombinator.com | 2024-03-05

Is it this library https://github.com/google/yggdrasil-decision-forests ?
Interpretability related posts
- penzai: JAX research toolkit for building, editing, and visualizing neural nets
- Shap v0.45.0
- Exploring GradCam and More with FiftyOne
- [D] Convert a ML model into a rule based system
- Safety in Deep Reinforcement Learning
- Adversarial Reinforcement Learning
Index
What are some of the best open-source Interpretability projects? This list will help you:
Rank | Project | Stars
---|---|---
1 | shap | 21,632 |
2 | awesome-production-machine-learning | 15,947 |
3 | pytorch-grad-cam | 9,456 |
4 | interpret | 5,998 |
5 | lucid | 4,599 |
6 | captum | 4,568 |
7 | shapash | 2,642 |
8 | alibi | 2,289 |
9 | Awesome-Federated-Learning | 1,848 |
10 | penzai | 1,334 |
11 | DALEX | 1,323 |
12 | imodels | 1,290 |
13 | transformers-interpret | 1,212 |
14 | responsible-ai-toolbox | 1,208 |
15 | tf-explain | 1,007 |
16 | Transformer-MM-Explainability | 704 |
17 | decision-forests | 650 |
18 | tcav | 616 |
19 | pyreft | 610 |
20 | xplique | 569 |
21 | FastTreeSHAP | 493 |
22 | pyvene | 462 |
23 | yggdrasil-decision-forests | 423 |