AIF360 vs EthicML
| | AIF360 | EthicML |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 2,311 | 24 |
| Stars growth | 2.3% | - |
| Activity | 7.2 | 9.3 |
| Last commit | 9 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
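The exact formula behind the activity score isn't published; as an illustration only, a recency-weighted score could be built with an exponential decay, where each commit's weight halves after a chosen half-life (the 90-day half-life below is an assumption, not the site's actual parameter):

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=90):
    """Hypothetical recency-weighted activity score: each commit
    contributes a weight that halves every `half_life_days`, so
    recent commits count more than older ones."""
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += 0.5 ** (age_days / half_life_days)
    return score

today = date(2022, 9, 1)
recent = [today - timedelta(days=n) for n in (1, 3, 7)]
old = [today - timedelta(days=n) for n in (300, 320, 365)]
# Three recent commits outweigh three equally numerous old ones:
print(activity_score(recent, today) > activity_score(old, today))  # True
```

This captures the stated behavior (recent commits weigh more) without claiming to reproduce the real metric.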
AIF360
- https://aif360.mybluemix.net/
- How to detect and tackle bias in my data?
- Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360 on the machine learning side of things.
- Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
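To illustrate the kind of group-fairness check that AI Fairness 360 automates, here is a minimal, self-contained sketch of two of its metrics (disparate impact and statistical parity difference) computed by hand; the toy data, group names, and the 0.8 "four-fifths rule" threshold are illustrative assumptions:

```python
def selection_rate(labels, groups, group):
    """Fraction of favorable outcomes (label == 1) within one group."""
    in_group = [y for y, g in zip(labels, groups) if g == group]
    return sum(in_group) / len(in_group)

def disparate_impact(labels, groups, privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rates.
    A value below ~0.8 is a common red flag (four-fifths rule)."""
    return (selection_rate(labels, groups, unprivileged)
            / selection_rate(labels, groups, privileged))

def statistical_parity_difference(labels, groups, privileged, unprivileged):
    """Difference of selection rates; 0 means parity."""
    return (selection_rate(labels, groups, unprivileged)
            - selection_rate(labels, groups, privileged))

# Toy data: 1 = favorable outcome; 'A' is the privileged group.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(disparate_impact(labels, groups, 'A', 'B'))               # 0.25 / 0.75 ≈ 0.333
print(statistical_parity_difference(labels, groups, 'A', 'B'))  # 0.25 - 0.75 = -0.5
```

AIF360 exposes the same quantities through `BinaryLabelDatasetMetric` on a wrapped dataset, along with mitigation algorithms such as reweighing; the sketch above just makes the underlying arithmetic visible.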
- [R] What are some of the best research papers to look into for ML Bias
EthicML
- [R] An overview of some available Fairness Frameworks & Packages
These are all great tools. I found, though, that no single package had the flexibility my research group needed for work in this area, so we wrote EthicML. Some of you may find it useful too.
What are some alternatives?
fairlearn - A Python package to assess and improve fairness of machine learning models.
responsible-ai-toolbox - Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
DALEX - moDel Agnostic Language for Exploration and eXplanation
AIX360 - Interpretability and explainability of data and machine learning models
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
interpret - Fit interpretable models. Explain blackbox machine learning.
pygod - A Python Library for Graph Outlier Detection (Anomaly Detection)
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
verifyml - Open-source toolkit to help companies implement responsible AI workflows.