pytorch-grad-cam VS Transformer-Explainability

Compare pytorch-grad-cam vs Transformer-Explainability and see what their differences are.

Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks. (by hila-chefer)
                pytorch-grad-cam    Transformer-Explainability
Mentions        5                   1
Stars           9,351               1,656
Growth          -                   -
Activity        5.4                 0.0
Latest commit   about 1 month ago   3 months ago
Language        Python              Jupyter Notebook
License         MIT License         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

pytorch-grad-cam

Posts with mentions or reviews of pytorch-grad-cam. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-13.
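pytorch-grad-cam implements Grad-CAM-style class activation maps: the gradients of a class score with respect to a convolutional layer's activations are global-average-pooled into per-channel weights, and the weighted, ReLU-ed sum of the activation maps gives the heatmap. A minimal NumPy sketch of that core computation (with random arrays standing in for the activations and gradients the library would obtain from a real forward/backward pass):

```python
import numpy as np

# Stand-ins for what Grad-CAM extracts from a target layer:
# K feature maps of size H x W, plus the gradients of the
# chosen class score with respect to those activations.
K, H, W = 4, 7, 7
rng = np.random.default_rng(0)
activations = rng.standard_normal((K, H, W))
gradients = rng.standard_normal((K, H, W))

# Grad-CAM channel weights: global-average-pool the gradients.
weights = gradients.mean(axis=(1, 2))  # shape (K,)

# Weighted sum of activation maps; ReLU keeps only positive evidence.
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)

# Normalize to [0, 1] so the map can be overlaid on the input image.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (7, 7)
```

This is only the weighting step; the library itself also handles hooking the target layer, batching, smoothing, and upsampling the map to the input resolution.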

Transformer-Explainability

Posts with mentions or reviews of Transformer-Explainability. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-25.
  • [Project] Recent Class Activation Map Methods for CNNs and Vision Transformers
    2 projects | /r/MachineLearning | 25 Apr 2021
    Not exactly the same, but since you mentioned using ViT's attention outputs as a 2D feature map for the CAM, you could consider this paper (Transformer Interpretability Beyond Attention Visualization), where they study how to choose/mix the attention scores across layers in a way that can be visualized (similar to CAMs). Maybe it can lead to better results. https://arxiv.org/abs/2012.09838 https://github.com/hila-chefer/Transformer-Explainability
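The "choose/mix the attention scores" question the post refers to is often illustrated with the attention-rollout baseline (Abnar & Zuidema), which Chefer et al.'s relevance-propagation method builds on and refines. A simplified NumPy sketch of rollout, using toy row-stochastic attention matrices in place of a real ViT's attention outputs:

```python
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (heads, tokens, tokens) matrices, one per layer.

    Averages over heads, mixes in the identity to account for residual
    connections, re-normalizes rows, and composes the layers bottom-up.
    """
    num_tokens = attentions[0].shape[-1]
    rollout = np.eye(num_tokens)
    for attn in attentions:
        a = attn.mean(axis=0)                    # average over heads
        a = 0.5 * a + 0.5 * np.eye(num_tokens)   # residual connection
        a = a / a.sum(axis=-1, keepdims=True)    # keep rows stochastic
        rollout = a @ rollout                    # compose with lower layers
    return rollout

# Toy stack: 3 layers, 2 heads, 5 tokens, softmax-like rows.
rng = np.random.default_rng(0)
layers = []
for _ in range(3):
    raw = rng.random((2, 5, 5))
    layers.append(raw / raw.sum(axis=-1, keepdims=True))

r = attention_rollout(layers)
# Row 0 (e.g. the CLS token) gives one relevance score per input token.
print(r[0].shape)  # (5,)
```

This is a hand-written illustration of the rollout idea, not the repository's actual code; the Transformer-Explainability method additionally weights the attention by layer-wise relevance and gradients rather than averaging heads uniformly.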

What are some alternatives?

When comparing pytorch-grad-cam and Transformer-Explainability you can also consider the following projects:

pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

shap - A game theoretic approach to explain the output of any machine learning model.

pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch

T2T-ViT - ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

tf-keras-vis - Neural network visualization toolkit for tf.keras

multi-label-sentiment-classifier - How to build a multi-label sentiment classifier with Tez and PyTorch

Transformer-MM-Explainability - [ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.

HugsVision - HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision

pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers

tf-metal-experiments - TensorFlow Metal Backend on Apple Silicon Experiments (just for fun)

Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time

deep-text-recognition-benchmark - PyTorch code of my ICDAR 2021 paper Vision Transformer for Fast and Efficient Scene Text Recognition (ViTSTR)