| | alibi | interpret |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 2,289 | 5,998 |
| Growth | 0.6% | 0.5% |
| Activity | 7.7 | 9.7 |
| Latest commit | 8 days ago | 7 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alibi
- Alibi: Open-source Python lib for ML model inspection and interpretation
-
Ask HN: Who is hiring? (January 2022)
Seldon | Multiple positions | London/Cambridge UK | Onsite/Remote | Full time | seldon.io
At Seldon we are building industry-leading solutions for deploying, monitoring, and explaining machine learning models. We are an open-core company with several successful open source projects, including:
* https://github.com/SeldonIO/seldon-core
* https://github.com/SeldonIO/mlserver
* https://github.com/SeldonIO/alibi
* https://github.com/SeldonIO/alibi-detect
* https://github.com/SeldonIO/tempo
We are hiring for a range of positions, including software engineers (Go, k8s), ML engineers (Python, Go), frontend engineers (JS), UX designers, and product managers. All open positions can be found at https://www.seldon.io/careers/
- Ask HN: Who is hiring? (December 2021)
-
Best alternatives to 'shap' package?
Alibi explain might be an option depending on what you are looking for https://github.com/SeldonIO/alibi
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit of a best of both worlds between linear models and random forests: generalized additive models fit with boosted decision trees.
Disclosure: I helped build this open source package
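To make the "GAM fit with boosted trees" idea concrete, here is a minimal sketch of the concept using only scikit-learn. This is not the interpret library's actual EBM implementation (which also does bagging, pairwise interactions, and careful binning); it just illustrates the core trick: cycle over features, boosting a small tree on one feature at a time, so the final model is an inspectable sum of per-feature shape functions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: additive ground truth, one nonlinear and one quadratic term.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

n_rounds, lr = 50, 0.5
shape_fns = [[] for _ in range(X.shape[1])]  # per-feature tree ensembles
pred = np.full(len(y), y.mean())             # start from the mean

for _ in range(n_rounds):
    for j in range(X.shape[1]):              # round-robin over features
        tree = DecisionTreeRegressor(max_leaf_nodes=4)
        tree.fit(X[:, [j]], y - pred)        # fit residual on feature j only
        shape_fns[j].append(tree)
        pred += lr * tree.predict(X[:, [j]])

def contribution(j, xj):
    """Contribution of feature j alone -- directly plottable/inspectable."""
    xj = np.asarray(xj, dtype=float).reshape(-1, 1)
    return lr * sum(t.predict(xj) for t in shape_fns[j])

r2 = 1 - np.mean((y - pred) ** 2) / np.var(y)
```

Because each feature's effect lives in its own ensemble, `contribution(j, ...)` can be plotted as a 1-D curve, which is what makes EBMs interpretable despite being tree-based.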
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model, not a gradient-boosted-trees model) is a step in that direction: https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also, I'd recommend everyone who works with XGBoost give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
-
[D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs. Generalized Additive Model). It will also give you explanations via SHAP or LIME values.
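Since SHAP values come up repeatedly above, here is a brute-force sketch of the underlying idea, not the shap library's optimized algorithm: a feature's Shapley value is its marginal contribution to the model output, averaged over all subsets of the other features, with missing features filled in from a background dataset (mean imputation is one common approximation).

```python
import itertools
import math
import numpy as np

def shapley_values(model, x, X_background):
    """Exact Shapley values by subset enumeration (exponential in n_features)."""
    n = len(x)
    base = X_background.mean(axis=0)  # "missing" features take the mean

    def value(S):
        z = base.copy()
        z[list(S)] = x[list(S)]       # present features take their real value
        return model(z)

    phi = np.zeros(n)
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi

# Toy linear model: for linear models, Shapley values with mean imputation
# reduce to coef * (x - mean), which makes the result easy to check.
coefs = np.array([2.0, -1.0, 0.5])
model = lambda z: float(coefs @ z)
X_bg = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])  # column means = [1, 1, 1]
x = np.array([3.0, 0.0, 1.0])
phi = shapley_values(model, x, X_bg)  # phi ≈ coefs * (x - [1, 1, 1]) = [4, 1, 0]
```

Real SHAP implementations avoid this exponential enumeration (e.g. via sampling or tree-specific algorithms), but the quantity being estimated is exactly this weighted average of marginal contributions.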
What are some alternatives?
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
shap - A game theoretic approach to explain the output of any machine learning model.
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
conductor - Conductor is a microservices orchestration engine.
imodels - Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
medspacy - Library for clinical NLP with spaCy.
causallift - CausalLift: Python package for causality-based Uplift Modeling in real-world business
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest
MindsDB - The platform for customizing AI from enterprise data
DashBot-3.0 - Geometry Dash bot to play & finish levels - Now training much faster!