| | DALEX | EthicML |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 1,323 | 24 |
| Growth | 0.6% | - |
| Activity | 5.5 | 9.3 |
| Last commit | 2 months ago | 2 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
DALEX
Twitter set to accept ‘best and final offer’ of Elon Musk
Which he will not do, because: a) He can't, it's a black box algorithm. It actually is open source already, but that doesn't mean much as it's useless without Twitter's data https://github.com/ModelOriented/DALEX b) He won't release data that shows the algorithm is racist and amplifies conservative and extremist content. He won't remove such functions because it will cost him billions.
[D] What are your favorite Random Forest implementations that support categoricals
There are a couple of ways to use Shapley values for explanations in R. One way is to use DALEX, which also contains a lot of other methods besides SHAP. Another one is iml. I am sure there are several other implementations of SHAP as well.
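The comment above mentions SHAP-style explanations without saying what they compute. As a language-agnostic illustration, here is a minimal pure-Python sketch of exact Shapley attributions for a tiny model; the `predict` function and the feature names are invented for the example and are not DALEX's API:

```python
from itertools import permutations

def predict(features):
    # Hypothetical linear model standing in for any fitted model.
    return 1.0 + 2.0 * features["age"] + 3.0 * features["income"]

def shapley_values(x, baseline, predict_fn):
    """Exact Shapley attributions: average the marginal contribution of
    each feature over every ordering in which features are switched
    from their baseline value to their observed value."""
    names = list(x)
    contrib = {name: 0.0 for name in names}
    orders = list(permutations(names))
    for order in orders:
        current = dict(baseline)
        prev = predict_fn(current)
        for name in order:
            current[name] = x[name]           # reveal this feature
            new = predict_fn(current)
            contrib[name] += new - prev       # marginal contribution
            prev = new
    return {name: total / len(orders) for name, total in contrib.items()}

phi = shapley_values({"age": 1.0, "income": 1.0},
                     {"age": 0.0, "income": 0.0}, predict)
# For a linear model the attributions recover the coefficient effects:
# phi == {"age": 2.0, "income": 3.0}
```

The attributions always sum to `predict(x) - predict(baseline)` (the efficiency property), which is what makes Shapley values attractive for explanations; libraries such as DALEX and iml approximate this average instead of enumerating all orderings.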
EthicML
[R] An overview of some available Fairness Frameworks & Packages
These are all great tools. I found, though, that no single package offered the flexibility my research group needed for work in this area, so we wrote EthicML. Some of you may find it useful too.
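For readers unfamiliar with what fairness packages like EthicML measure, a common starting point is demographic parity: the positive-prediction rate should be similar across sensitive groups. A minimal sketch of that metric in plain Python (this is not EthicML's API, and the toy data is invented for illustration):

```python
def positive_rate(predictions, groups, group):
    # Fraction of positive (1) predictions within one sensitive group.
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups;
    0.0 means perfect demographic parity."""
    a, b = sorted(set(groups))
    return abs(positive_rate(predictions, groups, a)
               - positive_rate(predictions, groups, b))

# Toy data: 1 = positive prediction; sensitive attribute takes values "A"/"B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
# Group A rate is 3/4, group B rate is 1/4, so the gap is 0.5.
```

Fairness toolkits (EthicML, fairlearn, AIF360) provide this and many other metrics, plus mitigation algorithms that adjust data or models to shrink such gaps.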
What are some alternatives?
shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
responsible-ai-toolbox - A suite of tools providing model and data exploration and assessment interfaces and libraries that support a better understanding of AI systems, helping developers and stakeholders build and monitor AI more responsibly and take better data-driven actions.
captum - Model interpretability and understanding for PyTorch
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
Lime-For-Time - Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification
pygod - A Python Library for Graph Outlier Detection (Anomaly Detection)
fairlearn - A Python package to assess and improve fairness of machine learning models.
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
verifyml - Open-source toolkit to help companies implement responsible AI workflows.
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.