responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (by microsoft)
DALEX
moDel Agnostic Language for Exploration and eXplanation (by ModelOriented)
| | responsible-ai-toolbox | DALEX |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 1,208 | 1,323 |
| Growth (stars, month over month) | 6.1% | 1.0% |
| Activity | 9.6 | 5.5 |
| Last commit | 11 days ago | 2 months ago |
| Language | TypeScript | Python |
| License | MIT License | GNU General Public License v3.0 only |
Mentions counts the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
responsible-ai-toolbox
Posts with mentions or reviews of responsible-ai-toolbox. We have used some of these posts to build our list of alternatives and similar projects.
- [N] Debugging Machine Learning
  http://erroranalysis.ai/ is a new open-source tool for in-depth understanding and diagnosis of machine learning errors. The tool is available as a highly interactive Jupyter widget and brings together several visualization primitives, all centered around ML debugging activities (main repo: https://github.com/microsoft/responsible-ai-widgets).
- [N] Visualize Your Model Errors! Microsoft Toolkit Identifies and Diagnoses ML Failures
  The toolkit is on the project GitHub. Additional information is available on the Error Analysis website.
DALEX
Posts with mentions or reviews of DALEX. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-25.
- Twitter set to accept ‘best and final offer’ of Elon Musk
  Which he will not do, because: a) he can't; it's a black-box algorithm. It actually is open source already, but that doesn't mean much, as it's useless without Twitter's data (https://github.com/ModelOriented/DALEX). b) he won't release data showing the algorithm is racist and amplifies conservative and extremist content, and he won't remove such functions because it would cost him billions.
- [D] What are your favorite Random Forest implementations that support categoricals
  There are a couple of ways to use Shapley values for explanations in R. One way is DALEX, which also contains many other methods besides SHAP; another is iml. I am sure there are several other SHAP implementations as well.
What are some alternatives?
When comparing responsible-ai-toolbox and DALEX you can also consider the following projects:
EthicML - Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).