Suggest an alternative to Transformer-MM-Explainability

[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.

A URL to the alternative repo (e.g. GitHub, GitLab)

Here you can share your experience with the project you are suggesting, or describe how it compares with Transformer-MM-Explainability. Optional.

A valid email address where you will receive a verification link when necessary, or log in instead.