interpret vs medspacy

| | interpret | medspacy |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 5,998 | 478 |
| Growth | 0.5% | 1.9% |
| Activity | 9.7 | 8.3 |
| Last commit | 10 days ago | 3 days ago |
| Language | C++ | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit like a best of both worlds between linear models and random forests (generalized additive models fit with boosted decision trees).
Disclosure: I helped build this open source package
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft Explainable Boosting Machine (which is a Generalized Additive Model and not a Gradient Boosted Trees model) is a step in that direction https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also I'd recommend everyone who works with xgboost to give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
-
[D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
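The mentions above describe EBMs as generalized additive models fit with boosted trees that stay glass-box interpretable. As a rough illustration, here is a minimal sketch of fitting an ExplainableBoostingClassifier with the interpret package; the dataset and parameter choices are assumptions for demonstration, not taken from any of the posts above.

```python
# Minimal sketch of fitting an Explainable Boosting Machine (EBM) with interpret.
# The dataset and settings here are illustrative assumptions, not from the posts above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM learns one boosted-tree shape function per feature (a GAM), so each
# feature's contribution can be inspected directly after fitting.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("test accuracy:", ebm.score(X_test, y_test))

# Global explanation: per-feature shape functions and importances.
global_exp = ebm.explain_global()
# In a notebook, interpret's dashboard can render this interactively:
# from interpret import show; show(global_exp)
```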
medspacy
-
Guidance needed: Extracting diseases and symptoms from medical text
https://github.com/medspacy/medspacy and https://allenai.github.io/scispacy/ should get you most of the way there
-
[N] Google confirms DeepMind Health Streams project has been killed off
Not to hand, but there are a few frameworks. The big one is cTAKES, but there's also fastumls. I work with two others: LEO, which is a fancy version of cTAKES, and medspacy, which is a medical version of spaCy and is great. Bonus points: medspacy is in Python. Disclaimer: I actually work on medspacy. https://github.com/medspacy/medspacy
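For the disease/symptom extraction question above, a minimal medspacy pipeline might look like the sketch below. The example text, rule labels, and pipe name are assumptions based on medspacy's documented quickstart, not details from the posts; adjust them to your own medspacy version and data.

```python
# Minimal sketch of a medspacy pipeline for clinical concept extraction.
# Example text, rule labels, and the pipe name are illustrative assumptions.
import medspacy
from medspacy.ner import TargetRule

# medspacy.load() builds a spaCy pipeline with clinical defaults
# (sentence splitting, rule-based target matching, ConText for negation/uncertainty).
nlp = medspacy.load()

target_matcher = nlp.get_pipe("medspacy_target_matcher")
target_matcher.add([
    TargetRule("pneumonia", "PROBLEM"),
    TargetRule("shortness of breath", "PROBLEM"),
])

doc = nlp("Patient denies shortness of breath but reports pneumonia last year.")
for ent in doc.ents:
    # ConText sets extension attributes such as is_negated on each entity.
    print(ent.text, ent.label_, "negated:", ent._.is_negated)
```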
What are some alternatives?
shap - A game theoretic approach to explain the output of any machine learning model.
spaCy - Industrial-strength Natural Language Processing (NLP) in Python
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
tf-transformers - State of the art faster Transformer with Tensorflow 2.0 (NLP, Computer Vision, Audio).
alibi - Algorithms for explaining machine learning models
scispacy - A full spaCy pipeline and models for scientific/biomedical documents.
imodels - Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
practical-pytorch - Go to https://github.com/pytorch/tutorials - this repo is deprecated and no longer maintained
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest
course-nlp - A Code-First Introduction to NLP course
DashBot-3.0 - Geometry Dash bot to play & finish levels - Now training much faster!
TopMost - A Topic Modeling System Toolkit