| | pytea | AIX360 |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 310 | 1,533 |
| Growth | 0.3% | 2.0% |
| Activity | 1.8 | 8.2 |
| Latest commit | about 2 years ago | 2 months ago |
| Language | TypeScript | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
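The exact formula behind the activity number is not published, but the description above ("recent commits have higher weight than older ones") suggests a recency-weighted commit count. A minimal sketch of one such scheme, using an assumed exponential decay with a hypothetical 30-day half-life:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so a commit from last week
    counts far more than one from last year. Illustrative only --
    the site does not document its actual formula or parameters.
    """
    now = datetime(2024, 1, 1, tzinfo=timezone.utc)  # fixed "now" for reproducibility
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score

# Five commits last week vs. five commits two years ago:
recent = [datetime(2023, 12, 25, tzinfo=timezone.utc)] * 5
old = [datetime(2022, 1, 1, tzinfo=timezone.utc)] * 5
print(activity_score(recent) > activity_score(old))  # recent history scores higher
```

Under any weighting of this shape, a project with a commit two months ago (AIX360) will score well above one whose latest commit is about two years old (pytea), which matches the 8.2 vs. 1.8 figures in the table.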
pytea
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?

AIX360
- [R] Explaining the Explainable AI: A 2-Stage Approach - Link to a free online lecture by the author in comments
- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques https://arxiv.org/abs/1909.03012 https://github.com/Trusted-AI/AIX360
What are some alternatives?
examples - A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
explainable-cnn - 📦 PyTorch based visualization package for generating layer-wise explanations for CNNs.
uncertainty-toolbox - Uncertainty Toolbox: a Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization
WeightWatcher - The WeightWatcher tool for predicting the accuracy of Deep Neural Networks
DiCE - Generate Diverse Counterfactual Explanations for any machine learning model.
vivit - [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivatives & Newton steps
awesome-shapley-value - Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
Transformer-MM-Explainability - [ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
backpack - BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient.