imodels
interpret
| | imodels | interpret |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 1,290 | 5,988 |
| Growth | - | 1.2% |
| Activity | 8.5 | 9.7 |
| Last commit | 5 days ago | 6 days ago |
| Language | Jupyter Notebook | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
imodels
-
[D] Have researchers given up on traditional machine learning methods?
- all domains requiring high interpretability largely ignore deep learning and put their research into traditional ML; see e.g. counterfactual examples, an important interpretability method in finance, or rule-based learning, important in medical or legal applications
-
What would be my best approach given the data I have?
Next, this variable will be your target, and you can use various supervised learning models to answer your question. Since interpretation is key, you can use something from here: https://github.com/csinva/imodels, or fit some black-box models and use SHAP to understand which features contributed most.
-
Random Forest Estimation Question
Option 2) fit a model from https://github.com/csinva/imodels on the predicted values of the RF
-
UC Berkeley Researchers Introduce ‘imodels’: A Python Package For Fitting Interpretable Machine Learning Models
Despite recent breakthroughs in the formulation and fitting of interpretable models, implementations are frequently challenging to locate, utilize, and compare. imodels fills this gap by offering a single interface and implementation for a wide range of state-of-the-art interpretable modeling techniques, especially rule-based methods. imodels is a Python package for predictive modeling that is simple, transparent, and accurate. It gives users a straightforward way to fit and use state-of-the-art interpretable models, all of which are compatible with scikit-learn (Pedregosa et al., 2011). These models can frequently replace black-box models while boosting interpretability and computing efficiency without compromising forecast accuracy.
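Because every imodels estimator follows the scikit-learn API described above, the fit/predict workflow is the same as for any sklearn model. A minimal sketch, using sklearn's built-in DecisionTreeClassifier as a stand-in so it runs with scikit-learn alone (with imodels installed, you would swap in one of its rule-based estimators, e.g. RuleFitClassifier, per the package's README):

```python
# Sketch of the sklearn-compatible workflow that imodels estimators share.
# A shallow decision tree stands in here for an imodels rule-based model.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth=3 keeps the fitted tree small enough to read and plot
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

With imodels, only the import and constructor would change; `fit`, `predict`, and `score` stay identical.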
-
[D] Looking for open source projects to contribute
Our package imodels is expanding its sklearn-compatible set of interpretable models and is always looking for new contributors!
- imodels: a package extending sklearn with state-of-the-art models for interpretable data science (e.g. Bayesian Rule Lists, RuleFit)
- imodels: a package extending sklearn with state-of-the-art interpretable models (e.g. Bayesian Rule Lists, RuleFit) from BAIR [P]
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit like a best of both worlds between linear models and random forests (generalized additive models fit with boosted decision trees)
Disclosure: I helped build this open source package
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model and not a Gradient Boosted Trees 🙄 model) is a step in that direction https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also I'd recommend everyone who works with xgboost to give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
-
[D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
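The "GAM fit with boosted trees" idea behind EBMs can be illustrated in a few lines: cycle over features, boosting a small tree on the current residual using only that one feature, so each feature accumulates its own additive shape function. This is a toy sketch under simplified assumptions, not interpret's actual implementation (the real EBM adds bagging, feature binning, and pairwise interaction terms):

```python
# Toy sketch of a GAM fit with boosted trees: one additive shape
# function per feature, learned by cyclic boosting of shallow trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

n_rounds, lr = 50, 0.2
shape_fns = [[] for _ in range(X.shape[1])]  # per-feature tree ensembles
pred = np.full_like(y, y.mean())

for _ in range(n_rounds):
    for j in range(X.shape[1]):              # cycle over the features
        residual = y - pred
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X[:, [j]], residual)        # tree sees only feature j
        shape_fns[j].append(tree)
        pred += lr * tree.predict(X[:, [j]])

print(f"training MSE: {np.mean((y - pred) ** 2):.3f}")
```

Because each ensemble in `shape_fns` depends on a single feature, its learned shape function can be plotted directly, which is what makes the model interpretable, and at prediction time each one reduces to a per-feature lookup.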
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
What are some alternatives?
pycaret - An open-source, low-code machine learning library in Python
shap - A game theoretic approach to explain the output of any machine learning model.
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
linear-tree - A python library to build Model Trees with Linear Models at the leaves.
alibi - Algorithms for explaining machine learning models
docarray - Represent, send, store and search multimodal data
medspacy - Library for clinical NLP with spaCy.
Mathematics-for-Machine-Learning-and-Data-Science-Specialization-Coursera - Mathematics for Machine Learning and Data Science Specialization - Coursera - deeplearning.ai - solutions and notes
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest
dopamine - Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
DashBot-3.0 - Geometry Dash bot to play & finish levels - Now training much faster!