sagemaker-explaining-credit-decisions
interpret
| | sagemaker-explaining-credit-decisions | interpret |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 94 | 5,998 |
| Star growth | - | 1.4% |
| Activity | 2.4 | 9.7 |
| Last commit | 12 months ago | 6 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sagemaker-explaining-credit-decisions
-
Deploying a LightGBM classifier as an AWS SageMaker endpoint?
Have looked at:
- https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html
- https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-create.html
- https://sagemaker-immersionday.workshop.aws/lab3/option1.html

The above don't specify LightGBM, but the concept of Bring Your Own Container/Algorithm is the same. I think this article might be more than what you need, but it does reference LightGBM: https://github.com/awslabs/sagemaker-explaining-credit-decisions. And also this one: https://github.com/aws-samples/amazon-sagemaker-script-mode/blob/master/lightgbm-byo/lightgbm-byo.ipynb
-
Ask the Experts: AWS Data Science and ML Experts - Mar 9th @ 8AM ET / 1PM GMT!
Yes, SageMaker is an end-to-end service covering data labeling, data preparation, model training, model deployment, model monitoring, etc. This recent video will give you a grand hands-on tour of SageMaker: https://www.twitch.tv/aws/video/929163653. Training and deployment are based on Docker containers, either built-in (algorithms and open-source frameworks) or your own. With respect to LightGBM, you can easily start from the built-in scikit-learn container and add LightGBM to it. Here's a complete example: https://github.com/awslabs/sagemaker-explaining-credit-decisions.
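The "start from the built-in scikit-learn container and add LightGBM" step can be sketched as a Dockerfile. This is only a sketch: the base image URI below is a placeholder, and you need to look up the real ECR URI for your region and framework version in the AWS documentation before building.

```dockerfile
# Sketch: extend the prebuilt SageMaker scikit-learn image with LightGBM.
# <account>, <region>, and <version> are placeholders -- resolve the actual
# image URI for your region from the AWS documentation.
FROM <account>.dkr.ecr.<region>.amazonaws.com/sagemaker-scikit-learn:<version>

RUN pip install --no-cache-dir lightgbm
```

Alternatively, SageMaker's framework containers install packages listed in a `requirements.txt` placed alongside your entry-point script, which lets you pull in LightGBM without building and pushing a custom image at all.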
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit like a best of both worlds between linear models and random forests (generalized additive models fit with boosted decision trees)
Disclosure: I helped build this open source package
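To make the "best of both worlds" description concrete: an EBM fits a generalized additive model (optionally with pairwise interaction terms, the GA2M extension), where each shape function is learned by boosting shallow trees restricted to one feature at a time:

```latex
g\bigl(\mathbb{E}[y]\bigr) = \beta_0 + \sum_i f_i(x_i) + \sum_{i \neq j} f_{ij}(x_i, x_j)
```

Because each term depends on a single feature (or a single pair), it can be plotted and inspected directly, much like a linear-model coefficient, while the boosted trees give each term nonlinear flexibility.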
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model, not a Gradient Boosted Trees model) is a step in that direction: https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also I'd recommend everyone who works with xgboost to give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
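The "practically a lookup table" point can be illustrated with a minimal sketch. The bin edges and scores below are invented for illustration, not a trained model — `interpret`'s EBM learns these per-feature tables from data — but prediction really does reduce to summing binned per-feature contributions like this:

```python
import bisect

# Hypothetical per-feature lookup tables: "edges" are bin boundaries and
# "scores" are the contribution of each bin (len(scores) == len(edges) + 1).
# All numbers here are made up for illustration.
shape_functions = {
    "age":    {"edges": [30, 50],       "scores": [-0.4, 0.1, 0.6]},
    "income": {"edges": [20000, 60000], "scores": [-0.8, 0.2, 0.9]},
}
intercept = -0.2

def ebm_score(sample):
    """Sum the intercept plus each feature's binned contribution."""
    total = intercept
    for feature, value in sample.items():
        table = shape_functions[feature]
        idx = bisect.bisect_right(table["edges"], value)  # find the bin
        total += table["scores"][idx]
    return total

# age=42 falls in the middle bin (0.1); income=75000 in the top bin (0.9):
score = ebm_score({"age": 42, "income": 75000})  # -0.2 + 0.1 + 0.9 = 0.8
```

Prediction is a handful of binary searches and additions per sample, which is why EBMs are fast at inference even though training (boosting many shallow trees per feature) takes longer.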
-
[D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine (https://github.com/interpretml/interpret), which is not a GBM but a GAM (Gradient Boosting Machine vs. Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
What are some alternatives?
DALEX - moDel Agnostic Language for Exploration and eXplanation
shap - A game theoretic approach to explain the output of any machine learning model.
shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
alibi - Algorithms for explaining machine learning models
MindsDB - The platform for customizing AI from enterprise data
imodels - Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
amazon-sagemaker-script-mode - Amazon SageMaker examples for prebuilt framework mode containers, a.k.a. Script Mode, and more (BYO containers and models etc.)
medspacy - Library for clinical NLP with spaCy.
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest
DashBot-3.0 - Geometry Dash bot to play & finish levels - Now training much faster!