Top 9 Python explainable-ml Projects
-
Project mention: Show HN: PostgresML, now with analytics and project management | news.ycombinator.com | 2022-05-02
-
AIX360
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (https://arxiv.org/abs/1909.03012, https://github.com/Trusted-AI/AIX360)
Project mention: [R] Explaining the Explainable AI: A 2-Stage Approach - Link to a free online lecture by the author in comments | reddit.com/r/MachineLearning | 2022-03-20
-
DALEX
Project mention: Twitter set to accept ‘best and final offer’ of Elon Musk | reddit.com/r/news | 2022-04-25
Which he will not do, because: a) he can't; it's a black-box algorithm. It actually is open source already, but that doesn't mean much, as it's useless without Twitter's data (https://github.com/ModelOriented/DALEX). b) He won't release data that shows the algorithm is racist and amplifies conservative and extremist content, and he won't remove such functions because it would cost him billions.
-
Project mention: [R] The Shapley Value in Machine Learning | reddit.com/r/MachineLearning | 2022-02-25
Counterfactual and recourse-based explanations are an alternative approach to model explanation. I used to work in a large financial institution, where we were researching whether counterfactual explanation methods would lead to better reason codes for adverse action notices.
-
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (by carla-recourse)
Project mention: [R] CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms | reddit.com/r/MachineLearning | 2021-09-29
Abstract: Counterfactual explanations provide means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect for meaningful counterfactual explanations. As documented in recent reviews, there exists a quickly growing literature of available methods. Yet, in the absence of widely available open-source implementations, the decision in favour of certain models is primarily based on what is readily available. Going forward, to guarantee meaningful comparisons across explanation methods, we present CARLA (Counterfactual And Recourse Library), a Python library for benchmarking counterfactual explanation methods across both different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We have open-sourced CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from other research groups and practitioners.
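The core idea behind the abstract can be shown without any library: given a scoring model and a decision threshold, search for the smallest set of actionable feature changes that flips the outcome. Below is a minimal brute-force sketch; the credit model, feature names, threshold, and step size are all hypothetical stand-ins, not CARLA's API.

```python
import itertools

def score(features):
    # Hypothetical linear credit model: higher is better.
    income, debt, years_employed = features
    return 0.5 * income - 0.8 * debt + 0.3 * years_employed

def find_counterfactual(features, threshold, step=1.0, max_changes=5):
    """Return the smallest set of actionable changes (raise income,
    lower debt) that pushes the score over the approval threshold."""
    moves = [(0, +step), (1, -step)]  # (feature index, delta)
    for n in range(1, max_changes + 1):  # try fewest changes first
        for combo in itertools.product(moves, repeat=n):
            candidate = list(features)
            for idx, delta in combo:
                candidate[idx] += delta
            if score(candidate) >= threshold:
                return candidate
    return None  # no recourse within the allowed number of changes
```

Libraries like CARLA (and the methods it benchmarks) replace this enumeration with gradient-based or mixed-integer search and add constraints such as immutability and plausibility, but the contract is the same: instance in, actionable feature changes out.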
-
explainable-cnn
📦 PyTorch based visualization package for generating layer-wise explanations for CNNs.
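explainable-cnn itself produces gradient-based, layer-wise maps in PyTorch. As a framework-free sketch of the underlying question (which input regions drive the prediction?), here is a simple occlusion-sensitivity map; `predict` is a stand-in for any scoring function, e.g. a CNN's class logit.

```python
import numpy as np

def occlusion_map(image, predict, patch=2):
    """Model-agnostic occlusion sensitivity: zero out one patch at a
    time and record how much the model's score drops there."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - predict(occluded)
    return heat
```

High values in the returned heatmap mark regions whose removal hurts the score most, i.e. the regions the model relies on.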
-
shapley
The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
Project mention: AstraZeneca Researchers Explain the Concept and Applications of the Shapley Value in Machine Learning | reddit.com/r/artificial | 2022-02-17
Code for https://arxiv.org/abs/2202.05594 found: https://github.com/benedekrozemberczki/shapley
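The Shapley value credits each player with its marginal contribution averaged over all join orders. For a handful of classifiers in an ensemble game, as in the paper above, it can be computed exactly; here is a pure-Python sketch with a hypothetical payoff table standing in for the ensemble's characteristic function (not the library's API).

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    to the coalition value over every possible join order."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in phi.items()}

# Hypothetical ensemble game: value(S) is the payoff of voting with
# only the classifiers in S (here just a lookup table).
payoff = {frozenset(): 0, frozenset("a"): 1,
          frozenset("b"): 2, frozenset("ab"): 4}
```

Exact enumeration is factorial in the number of players, which is why the library implements approximation strategies for larger ensembles.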
-
Project mention: Deploying a LightGBM classifier as a AWS Sagemaker endpoint? | reddit.com/r/mlops | 2021-08-25
Have looked at:
- https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html
- https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-create.html
- https://sagemaker-immersionday.workshop.aws/lab3/option1.html
The above don't cover LightGBM specifically, but the Bring Your Own Container/Algorithm concept is the same. This article might be more than what you need, but it does reference LightGBM: https://github.com/awslabs/sagemaker-explaining-credit-decisions
And also this one: https://github.com/aws-samples/amazon-sagemaker-script-mode/blob/master/lightgbm-byo/lightgbm-byo.ipynb
-
cnn-raccoon
Create interactive dashboards for your Convolutional Neural Networks with a single line of code!
Python explainable-ml related posts
- Deploying a LightGBM classifier as a AWS Sagemaker endpoint?
- University of Tübingen Researchers Open-Source ‘CARLA’, A Python Library for Benchmarking Counterfactual Explanation Methods Across Data Sets and Machine Learning Models
- Ask the Experts: AWS Data Science and ML Experts - Mar 9th @ 8AM ET / 1PM GMT!
- CNN Racoon - Library for creating interactive dashboards for Convolutional Neural Networks
Index
What are some of the best open-source explainable-ml projects in Python? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | MindsDB | 6,901 |
| 2 | AIX360 | 1,110 |
| 3 | DALEX | 1,042 |
| 4 | DiCE | 830 |
| 5 | CARLA | 166 |
| 6 | explainable-cnn | 164 |
| 7 | shapley | 163 |
| 8 | sagemaker-explaining-credit-decisions | 79 |
| 9 | cnn-raccoon | 30 |