AIF360
fairlearn
| | AIF360 | fairlearn |
|---|---|---|
| Mentions | 6 | 6 |
| Stars | 2,311 | 1,795 |
| Growth | 2.3% | 2.3% |
| Activity | 7.2 | 8.0 |
| Latest commit | 9 days ago | 25 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AIF360
- https://aif360.mybluemix.net/
- How to detect and tackle bias in my data?
-
Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI, on the engineering front, and existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360, on the machine learning side of things.
-
Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
- [R] What are some of the best research papers to look into for ML Bias
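The bias detection these comments describe comes down to comparing favorable-outcome rates across groups. As a plain-Python illustration (not the AIF360 API itself; the data and group split below are invented), the statistical parity difference that AIF360 exposes via `BinaryLabelDatasetMetric.statistical_parity_difference()` can be sketched as:

```python
# Minimal sketch: statistical parity difference between two groups.
# AIF360 reports the same quantity via
# BinaryLabelDatasetMetric.statistical_parity_difference(); the
# decisions and group labels below are made up for illustration.

def selection_rate(labels):
    """Fraction of positive (favorable) outcomes in a group."""
    return sum(labels) / len(labels)

def statistical_parity_difference(unprivileged, privileged):
    """P(y=1 | unprivileged) - P(y=1 | privileged); 0.0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

# Hypothetical model decisions (1 = favorable outcome), split by group.
unprivileged = [1, 0, 0, 0]   # 25% selected
privileged   = [1, 1, 1, 0]   # 75% selected

spd = statistical_parity_difference(unprivileged, privileged)
print(spd)  # -0.5: the unprivileged group is selected far less often
```

A value near 0.0 indicates parity; large negative values are the kind of disparity these tools flag for mitigation.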
fairlearn
- Fairlearn
-
Open source projects to work on AI bias
I'm involved in the Fairlearn project, and we always love getting new contributors. We have a bunch of open issues, ranging from new functionality to writing documentation, so feel free to take a look and see if there is something you would like to work on.
-
In your experience, are AI Ethics teams valuable/effective? [D]
I'm involved with the Fairlearn project, so once I figure out what's necessary from the company policy side, my plan is to incorporate these methods into Fairlearn one day.
-
Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI, on the engineering front, and existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360, on the machine learning side of things.
-
Ideas on how to use my data skills for a good cause?
Another commenter mentioned contributing to open-source tools. If you're interested in going that route, I'm involved in the Fairlearn project, and we could always benefit from a good data engineer.
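Fairlearn's core metrics follow the same pattern: slice predictions by a sensitive feature and compare per-group rates. A minimal plain-Python sketch of that pattern (the idea behind Fairlearn's `MetricFrame` and `demographic_parity_difference`; the arrays and group names are invented, and this is not the library's own implementation):

```python
# Group predictions by a sensitive feature and compare selection rates,
# mirroring what fairlearn's MetricFrame / demographic_parity_difference
# compute. All data below is invented for illustration.
from collections import defaultdict

def rates_by_group(y_pred, sensitive):
    """Selection rate (mean prediction) per sensitive-feature value."""
    groups = defaultdict(list)
    for pred, attr in zip(y_pred, sensitive):
        groups[attr].append(pred)
    return {g: sum(v) / len(v) for g, v in groups.items()}

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap between any two groups' selection rates (0.0 = parity)."""
    rates = rates_by_group(y_pred, sensitive)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions and a two-valued sensitive feature.
y_pred    = [1, 1, 0, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(rates_by_group(y_pred, sensitive))                 # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

The library versions add input validation, support for multiple metrics at once, and mitigation algorithms on top of this basic group-by-and-compare step.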
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch) and deploy them with Lightning Apps (organized Python for end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
verifyml - Open-source toolkit to help companies implement responsible AI workflows.
AIX360 - Interpretability and explainability of data and machine learning models
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
interpret - Fit interpretable models. Explain blackbox machine learning.
Jenkins - Jenkins automation server
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
EthicML - Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
clai - Command Line Artificial Intelligence or CLAI is an open-sourced project from IBM Research aimed to bring the power of AI to the command line interface.