[Q] What's the community's opinion of "interpretable ML/AI"?

This page summarizes the projects mentioned and recommended in the original post on /r/statistics

  • lime

    Lime: Explaining the predictions of any machine learning classifier (by marcotcr)

  • LIME (https://github.com/marcotcr/lime) and Anchor (https://github.com/marcotcr/anchor), both by Marco Tulio Ribeiro (https://homes.cs.washington.edu/~marcotcr/).

  • shap

    A game theoretic approach to explain the output of any machine learning model.

  • I've become a zealot about parametric stats, specifically from the Bayesian paradigm. Something about studying the core business problem, choosing the best distribution(s), and making inferences has been really rewarding for me. But increasingly, I'm seeing tools like SHAP, which allegedly enable users of black-box ML models to intuit what/how their models "think". (SHAP is just one example.)

  • anchor

    Code for "High-Precision Model-Agnostic Explanations" paper (by marcotcr)

NOTE: The mention count for each project combines mentions in common posts with user-suggested alternatives, so a higher count indicates a more popular project.
