Interpret Alternatives
Similar projects and alternatives to interpret
- shap: A game theoretic approach to explain the output of any machine learning model (a minimal usage sketch follows after this list).
- shapash: 🔅 User-friendly explainability and interpretability to develop reliable and transparent machine learning models.
- imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
- decision-tree-classifier: Decision Tree Classifier and Boosted Random Forest.
- DashBot-3.0: Geometry Dash bot to play and finish levels, now training much faster!
- AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
- sagemaker-explaining-credit-decisions: Amazon SageMaker Solution for explaining credit decisions.
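The shap package above centers on a unified Explainer API. Below is a minimal sketch of explaining a scikit-learn model, assuming a recent shap release; the dataset and model are illustrative, not tied to any project listed here:

```python
# Minimal sketch: SHAP attributions for a scikit-learn tree ensemble.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# shap.Explainer dispatches to a suitable algorithm (a tree explainer here).
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global summary of per-feature attributions.
shap.plots.beeswarm(shap_values)
```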
interpret reviews and mentions
- [D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
- What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They offer something of a best of both worlds between linear models and random forests: generalized additive models fit with boosted decision trees.
Disclosure: I helped build this open source package
- [N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model, not a Gradient Boosted Trees 🙄 model) is a step in that direction: https://github.com/interpretml/interpret
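The Explainable Boosting Machine discussed in the two mentions above ships with the interpret package itself. Here is a minimal sketch of fitting and inspecting one, assuming the current interpret API; the dataset and parameters are illustrative:

```python
# Minimal sketch: fitting an Explainable Boosting Machine with interpret.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is a generalized additive model whose per-feature shape
# functions are learned with boosted decision trees.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("test accuracy:", ebm.score(X_test, y_test))

# Glassbox explanation: one shape-function plot per feature.
show(ebm.explain_global())
```

show() renders an interactive view in a notebook environment; the explanation object returned by explain_global() can also be inspected programmatically.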
Stats
interpretml/interpret is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of interpret is C++.