Top 23 explainable-ml Open-Source Projects
-
pytorch-grad-cam
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
-
shapash
Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
-
awesome-explainable-graph-reasoning
A collection of research papers and software related to explainability in graph machine learning.
-
imodels
Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
-
responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
-
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (by carla-recourse)
-
explainable-cnn
PyTorch-based visualization package for generating layer-wise explanations for CNNs.
-
shapley
The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
-
Deep_XF
Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series data sets, with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
-
TalkToModel
TalkToModel gives anyone the power of XAI through natural language conversations!
-
SegGradCAM
SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
-
cnn-raccoon
Create interactive dashboards for your Convolutional Neural Networks with a single line of code!
For the two examples we will be looking at, we will use pytorch_grad_cam, an incredible open-source package that makes working with Grad-CAM very easy. There are other excellent tutorials to check out on the repo as well.
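Before reaching for the package, it helps to see what Grad-CAM actually computes: each feature map of the target convolutional layer is weighted by the spatial average of its gradients, the weighted maps are summed, and the result is passed through a ReLU. A minimal NumPy sketch of that math (illustrative shapes and names, not the pytorch_grad_cam API):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from conv-layer activations and gradients.

    activations, gradients: arrays of shape (channels, H, W) captured
    at the target conv layer for a single image and class score.
    """
    # Channel weights: global-average-pool each channel's gradients.
    weights = gradients.mean(axis=(1, 2))                      # (channels,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] so it can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy inputs standing in for hooked activations/gradients.
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
heatmap = grad_cam(acts, grads)
```

In practice pytorch_grad_cam captures the activations and gradients for you via hooks on the layer you choose; the heatmap above would then be upsampled to the input resolution.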
Project mention: GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models | /r/learnmachinelearning | 2023-06-26
Retrieval using a single vector is called dense passage retrieval (DPR), because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes a vector per token, where each vector is influenced by surrounding context. This leads to meaningfully better results; for example, here's ColBERT running on Astra DB compared to DPR using openai-v3-small vectors, evaluated with TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
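ColBERT's late-interaction ("MaxSim") score is simple to state: for each query token vector, take its maximum cosine similarity over the document's token vectors, then sum across query tokens. A minimal NumPy sketch of that scoring rule (illustrative only, not the ColBERT or Astra DB API):

```python
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: each query token embedding is
    matched to its best document token embedding; the maxima are summed."""
    # Normalize rows so dot products are cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                   # (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()    # MaxSim per query token, summed

# Toy example: 2 query token vectors, two candidate passages of 3 tokens.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
doc_a = rng.normal(size=(3, 4))
doc_b = np.vstack([q, rng.normal(size=(1, 4))])  # contains the query tokens
score_a = maxsim_score(q, doc_a)
score_b = maxsim_score(q, doc_b)   # higher: exact token matches score 1.0
```

Because doc_b contains exact copies of the query token vectors, each query token finds a cosine similarity of 1.0 there, which is why token-level matching can separate passages that a single pooled vector would blur together.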
Project mention: Xplique Is a Neural Networks Explainability Toolbox | news.ycombinator.com | 2024-02-02
Just for some respite from the discussion of our soon-to-be AI overlords (LLMs), I'm one of the contributors to an open-source Python package, Xplainable (https://github.com/xplainable/xplainable). Xplainable is a novel (structured) machine learning algorithm that's inherently explainable, as opposed to being a post-hoc explainer (like SHAP or LIME).
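To make the post-hoc side of that contrast concrete: explainers like SHAP attribute a prediction to features via Shapley values, which can be computed exactly by enumerating feature coalitions when the feature count is tiny. A brute-force sketch (purely illustrative; not the Xplainable or SHAP API, and the feature names are made up):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all coalitions of features.
    Feasible only for a handful of features (2**n coalitions)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for coalition in combinations(others, k):
                s = len(coalition)
                # Shapley weight for a coalition of size s out of n players.
                w = factorial(s) * factorial(n - s - 1) / factorial(n)
                phi[f] += w * (value_fn(set(coalition) | {f})
                               - value_fn(set(coalition)))
    return phi

# Toy "model": an additive score, so each feature's Shapley value
# should recover exactly its own contribution.
contrib = {"age": 2.0, "income": 5.0, "tenure": -1.0}
model = lambda coalition: sum(contrib[f] for f in coalition)
vals = shapley_values(list(contrib), model)
```

An inherently explainable model skips this machinery entirely: its structure exposes the contributions directly, which is the trade-off Xplainable is aiming at.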
explainable-ml related posts
- Why Vector Compression Matters
- Explainable (Structured) Machine Learning Algorithm
- How are generative AI companies monitoring their systems in production?
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
- Deep_XF: NEW Data - star count:100.0
- GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Index
What are some of the best open-source explainable-ml projects? This list will help you:
# | Project | Stars
---|---|---
1 | pytorch-grad-cam | 9,410 |
2 | interpret | 5,988 |
3 | shapash | 2,642 |
4 | awesome-explainable-graph-reasoning | 1,934 |
5 | trulens | 1,612 |
6 | AIX360 | 1,527 |
7 | DALEX | 1,318 |
8 | imodels | 1,290 |
9 | DiCE | 1,267 |
10 | responsible-ai-toolbox | 1,208 |
11 | OmniXAI | 805 |
12 | xplique | 569 |
13 | tf-keras-vis | 305 |
14 | CARLA | 263 |
15 | explainable-cnn | 212 |
16 | shapley | 210 |
17 | awesome-shapley-value | 127 |
18 | Deep_XF | 110 |
19 | TalkToModel | 102 |
20 | sagemaker-explaining-credit-decisions | 94 |
21 | SegGradCAM | 94 |
22 | xplainable | 52 |
23 | cnn-raccoon | 31 |