EthicML
Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency (by wearepal)
AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. (by Trusted-AI)
| | EthicML | AIF360 |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 24 | 2,305 |
| Growth | - | 2.0% |
| Activity | 9.3 | 7.3 |
| Last commit | 3 days ago | 17 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
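The scheme above (recent commits weighted more heavily than older ones) can be sketched as an exponentially decayed sum. This is a hypothetical illustration only; the site does not document its actual formula, and the `half_life_days` parameter is an assumption.

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=90):
    """Recency-weighted commit count: each commit's weight halves
    every half_life_days, so recent commits dominate the score."""
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += 0.5 ** (age_days / half_life_days)
    return score

today = date(2024, 1, 1)
recent = [today - timedelta(days=n) for n in (1, 3, 7)]
old = [today - timedelta(days=n) for n in (300, 320, 400)]

# Three recent commits outscore three much older ones.
assert activity_score(recent, today) > activity_score(old, today)
```

A raw commit count would rate both projects equally; the decay is what makes the score "relative" to how recently development happened.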
EthicML
Posts with mentions or reviews of EthicML. We have used some of these posts to build our list of alternatives and similar projects.
- [R] An overview of some available Fairness Frameworks & Packages
These are all great tools. I found though that there wasn't one package with the flexibility of what we needed in my research group for work in this area, so we wrote EthicML. Some of you may also find it useful too.
AIF360
Posts with mentions or reviews of AIF360. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-10.
- How to detect and tackle bias in my data?
- Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's Fairness 360 on the machine learning side of things.
- Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
- [R] What are some of the best research papers to look into for ML Bias
What are some alternatives?
When comparing EthicML and AIF360 you can also consider the following projects:
responsible-ai-toolbox - Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
fairlearn - A Python package to assess and improve fairness of machine learning models.
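To make concrete what "assessing fairness" means in libraries like fairlearn and AIF360, here is a minimal pure-Python sketch of one of the group-fairness metrics they provide: the demographic parity difference, i.e. the gap in positive-prediction rates between sensitive groups. This is an illustrative implementation, not the libraries' code; both packages ship tested versions of the same idea.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rate across sensitive groups.

    y_pred: binary predictions (0/1).
    sensitive: group label for each prediction (same length as y_pred).
    Returns 0.0 when all groups receive positives at the same rate.
    """
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))  # positive rate for this group
    return max(rates) - min(rates)

y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" positive rate = 3/4, group "b" = 1/4, so the gap is 0.5.
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

Metrics like this are the "assess" half of these toolkits; the mitigation algorithms (pre-, in-, and post-processing) then try to drive such gaps toward zero without destroying predictive accuracy.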