| | Machine-Learning | interpret |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 86 | 6,007 |
| Growth | - | 0.6% |
| Activity | 3.4 | 9.7 |
| Latest commit | 5 months ago | 5 days ago |
| Language | Python | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Machine-Learning
-
I published a Free & Open Source book to Learn Python 3. It includes a nice website for online reading and PDF for offline reading. Any feedback is highly appreciated.
Thank you for sharing! Am I the only one who never learned tuples, lists, dictionaries, arrays and so on, yet is able to write some rather sophisticated Python code without really understanding the data structures that I use? See my GitHub repository at https://github.com/VincentGranville/Machine-Learning, full of Python code. I play with data structures the same way I play with grammar in English: I do it successfully, without knowing the rules or the inner workings.
-
My New Machine Learning Dictionary: Which Terms Would You Add?
Top entries are in bold, and sub-entries are in italics. This dictionary is from my new book “Intuitive Machine Learning and Explainable AI”, available here and used as reference material for the course with the same name (see here). These entries are cross-referenced in the book to facilitate navigation, with backlinks to the pages where they appear. The index, also with clickable backlinks, is a more comprehensive listing with 300+ terms. Both the glossary and index are available in PDF format here on my GitHub repository, and of course with clickable links within the book.
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit of a best-of-both-worlds between linear models and random forests: generalized additive models fit with boosted decision trees.
Disclosure: I helped build this open source package
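The "generalized additive models fit with boosted trees" idea above can be sketched in plain Python. This is a hypothetical illustration of the model form, not the interpret API: each feature gets its own learned 1-D "shape function" (here a binned lookup table, the kind of piecewise-constant function boosted trees on a single feature would produce), and the prediction is just the intercept plus the sum of per-feature contributions.

```python
import bisect

# Hypothetical learned "shape functions": per feature, bin edges plus a
# contribution value for each bin (what EBM-style training would produce).
SHAPE_FUNCTIONS = {
    "age":    {"edges": [30, 50],  "values": [-0.8, 0.1, 0.9]},
    "income": {"edges": [40_000],  "values": [-0.3, 0.4]},
}
INTERCEPT = 0.05

def feature_contribution(name, x):
    """One feature's contribution: find the bin for x, return its value."""
    f = SHAPE_FUNCTIONS[name]
    return f["values"][bisect.bisect_right(f["edges"], x)]

def predict_score(sample):
    """Additive model: intercept + sum of independent per-feature lookups."""
    return INTERCEPT + sum(feature_contribution(k, v) for k, v in sample.items())

# age=62 falls in the last bin (0.9), income=35_000 in the first (-0.3)
score = predict_score({"age": 62, "income": 35_000})
print(round(score, 2))  # 0.65
```

Because each shape function depends on a single feature, you can plot it directly, which is where the interpretability comes from, while the boosted-tree fitting lets each function be as nonlinear as the data demands.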
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model and not a Gradient Boosted Trees 🙄 model) is a step in that direction: https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also, I'd recommend everyone who works with xgboost to give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
-
[D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine: https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
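A reason additive models pair well with the SHAP/LIME-style explanations mentioned above: the score is a sum of per-feature terms, so each term is itself an exact local explanation. A minimal sketch with hypothetical hand-written feature functions (not the interpret or shap APIs):

```python
# Hypothetical per-feature term functions of an additive risk model.
TERMS = {
    "age": lambda x: 0.02 * (x - 40),        # risk rises linearly with age
    "bmi": lambda x: 0.05 * max(0, x - 25),  # only penalize BMI above 25
}
INTERCEPT = -0.5

def explain(sample):
    """Return the score and each feature's exact contribution to it."""
    contributions = {k: round(fn(sample[k]), 4) for k, fn in TERMS.items()}
    score = INTERCEPT + sum(contributions.values())
    return score, contributions

score, contribs = explain({"age": 60, "bmi": 31})
# The contributions sum exactly to score - intercept: no approximation,
# unlike post-hoc explanations of a black-box model.
print(contribs)         # {'age': 0.4, 'bmi': 0.3}
print(round(score, 2))  # 0.2
```

For a truly black-box model, SHAP has to approximate this additive decomposition; for a GAM it falls out of the model structure for free.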
What are some alternatives?
shap - A game theoretic approach to explain the output of any machine learning model.
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
alibi - Algorithms for explaining machine learning models
imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
medspacy - Library for clinical NLP with spaCy.
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest
DashBot-3.0 - Geometry Dash bot to play & finish levels - Now training much faster!
AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
DALEX - moDel Agnostic Language for Exploration and eXplanation
yggdrasil-decision-forests - A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
sagemaker-explaining-credit-decisions - Amazon SageMaker Solution for explaining credit decisions.
DiCE - Generate Diverse Counterfactual Explanations for any machine learning model.