shap VS Transformer-Explainability

Compare shap vs Transformer-Explainability and see how they differ.

Transformer-Explainability

[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications made by Transformer-based networks. (by hila-chefer)
                 shap               Transformer-Explainability
Mentions         38                 1
Stars            21,580             1,660
Growth           1.8%               -
Activity         9.4                0.0
Last commit      4 days ago         3 months ago
Language         Jupyter Notebook   Jupyter Notebook
License          MIT License        MIT License
The number of mentions indicates the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

shap

Posts with mentions or reviews of shap. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.
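
For context, here is a minimal sketch of a typical shap workflow. This is a hedged example rather than code from any of the posts above: the choice of an `xgboost` model and the `shap.datasets.california` helper are assumptions based on common shap usage.

```python
import shap
import xgboost

# Fit a simple tree model on a bundled example dataset.
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# shap.Explainer selects a suitable algorithm for the model type
# (a fast tree explainer here) and returns per-feature attributions.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Global summary: each point is one sample's attribution for one feature.
shap.plots.beeswarm(shap_values)
```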

Transformer-Explainability

Posts with mentions or reviews of Transformer-Explainability. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-25.
  • [Project] Recent Class Activation Map Methods for CNNs and Vision Transformers
    2 projects | /r/MachineLearning | 25 Apr 2021
    Not exactly the same, but since you mentioned using ViT's attention outputs as a 2D feature map for the CAM, you can consider this paper (Transformer Interpretability Beyond Attention Visualization), where they study how to choose/mix the attention scores in a way that can be visualized (similar to CAMs). Maybe it can lead to better results. https://arxiv.org/abs/2012.09838 https://github.com/hila-chefer/Transformer-Explainability
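
For a rough sense of the computation discussed in that thread, below is a hedged sketch of attention rollout (Abnar & Zuidema, 2020), the simpler baseline that the linked paper improves on with LRP-based relevance propagation. This is not the repository's own API; the function name and the expected input format are assumptions made for illustration.

```python
import torch

def attention_rollout(attentions):
    """Aggregate per-layer attention maps into a single relevance map.

    `attentions` is assumed to be a list of tensors, one per layer,
    each of shape (heads, tokens, tokens) for a single image.
    """
    num_tokens = attentions[0].shape[-1]
    rollout = torch.eye(num_tokens)
    for attn in attentions:
        attn = attn.mean(dim=0)                          # average over heads
        attn = 0.5 * attn + 0.5 * torch.eye(num_tokens)  # model residual path
        attn = attn / attn.sum(dim=-1, keepdim=True)     # re-normalize rows
        rollout = attn @ rollout                         # compose layers
    # Relevance of each patch token to the [CLS] token; reshape this
    # vector into the patch grid to render a CAM-style heatmap.
    return rollout[0, 1:]
```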

What are some alternatives?

When comparing shap and Transformer-Explainability, you can also consider the following projects:

shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.

captum - Model interpretability and understanding for PyTorch

T2T-ViT - ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

lime - Lime: Explaining the predictions of any machine learning classifier

multi-label-sentiment-classifier - How to build a multi-label sentiment classifier with Tez and PyTorch

interpret - Fit interpretable models. Explain blackbox machine learning.

HugsVision - HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision

awesome-production-machine-learning - A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning

tf-metal-experiments - TensorFlow Metal Backend on Apple Silicon Experiments (just for fun)

anchor - Code for "High-Precision Model-Agnostic Explanations" paper

deep-text-recognition-benchmark - PyTorch code of my ICDAR 2021 paper Vision Transformer for Fast and Efficient Scene Text Recognition (ViTSTR)