AIX360 vs AIF360
| | AIX360 | AIF360 |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 1,533 | 2,316 |
| Growth | 2.0% | 1.3% |
| Activity | 8.2 | 7.2 |
| Latest commit | 2 months ago | 14 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
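The site does not publish the exact weighting formula, but the idea of "recent commits count more" can be illustrated with a minimal sketch. Everything here (the function name, the exponential-decay assumption, the half-life parameter) is my own illustration, not the site's actual computation:

```python
from math import exp, log

def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy activity score: each commit contributes a weight that decays
    exponentially with its age, so recent commits count more than old
    ones. Illustrative only -- the real formula is not published."""
    decay = log(2) / half_life_days  # weight halves every half-life
    return sum(exp(-decay * age) for age in commit_ages_days)

# A repo with four recent commits scores higher than one with four old commits.
recent = activity_score([1, 3, 7, 10])
stale = activity_score([60, 90, 120, 150])
```

Under this sketch, `recent > stale` even though both repos have the same number of commits, which matches the behavior described above.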
AIX360
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- [R] Explaining the Explainable AI: A 2-Stage Approach - Link to a free online lecture by the author in comments
  "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques" https://arxiv.org/abs/1909.03012 https://github.com/Trusted-AI/AIX360
AIF360
- How to detect and tackle bias in my data?
- Building a Responsible AI Solution - Principles into Practice
  Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360 on the machine learning side.
- Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
  My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
- [R] What are some of the best research papers to look into for ML Bias
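To make "detect bias in my data" concrete, here is a plain-Python sketch of disparate impact, one of the group-fairness metrics AIF360 reports (its `BinaryLabelDatasetMetric` exposes it as `disparate_impact()`). The function and variable names below are my own illustration of the metric's definition, not the AIF360 API:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged rate / privileged
    rate. Values well below 1.0 flag that the unprivileged group receives
    the favorable outcome (1) less often. Assumes exactly two groups.
    Illustrative reimplementation, not the AIF360 API."""
    def favorable_rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / len(members)

    unprivileged = next(g for g in groups if g != privileged)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Group "a" gets the favorable outcome 3/4 of the time, group "b" only 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
di = disparate_impact(outcomes, groups, privileged="a")  # 0.25 / 0.75 = 1/3
```

A common rule of thumb (the "80% rule") treats a disparate impact below 0.8 as evidence of adverse impact; AIF360 pairs such detection metrics with mitigation algorithms like reweighing.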
What are some alternatives?
explainable-cnn - 📦 PyTorch based visualization package for generating layer-wise explanations for CNNs.
fairlearn - A Python package to assess and improve fairness of machine learning models.
cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
DiCE - Generate Diverse Counterfactual Explanations for any machine learning model.
interpret - Fit interpretable models. Explain blackbox machine learning.
awesome-shapley-value - Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
backpack - BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient.
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
DALEX - moDel Agnostic Language for Exploration and eXplanation
verifyml - Open-source toolkit to help companies implement responsible AI workflows.