pytea vs Transformer-MM-Explainability
| | pytea | Transformer-MM-Explainability |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 310 | 709 |
| Growth | 0.3% | - |
| Activity | 1.8 | 0.0 |
| Latest Commit | about 2 years ago | 9 months ago |
| Language | TypeScript | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
What are some alternatives?
examples - A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
TorchDrift - Drift Detection for your PyTorch Models
uncertainty-toolbox - Uncertainty Toolbox: a Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization
explainerdashboard - Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.
WeightWatcher - The WeightWatcher tool for predicting the accuracy of Deep Neural Networks
shap - A game theoretic approach to explain the output of any machine learning model.
vivit - [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivatives & Newton steps
clip-italian - CLIP (Contrastive Language-Image Pre-training) for Italian
delve - PyTorch model training and layer saturation monitor