How does an ML algorithm show which aspect of a comparison contributes more to the result?

This page summarizes the projects mentioned and recommended in the original post on /r/learnmachinelearning

  • lime

    Lime: Explaining the predictions of any machine learning classifier (by marcotcr)

  • LIME basically builds a bunch of very small linear models. It assumes that although a model may be very complex overall, it can be represented locally through linear relationships. LIME builds these linear models by picking a point and perturbing the data ever so slightly. A good Python implementation and more details can be found here.

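The local-linear idea described above can be sketched in a few lines. This is not the lime library itself, just a minimal illustration under assumed names: `black_box` is a hypothetical model, and the perturbation scale and proximity kernel are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([0.5, 1.0])  # the instance whose prediction we want to explain

# LIME's core move: perturb the instance slightly and query the black box.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# Weight perturbed samples by proximity to x0, then fit a local linear surrogate.
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.1 ** 2)
surrogate = LinearRegression().fit(Z, y, sample_weight=weights)

# The surrogate's coefficients approximate local feature importance:
# d/dx of sin(x) at x=0.5 is cos(0.5) ~ 0.88; d/dx of x**2 at x=1.0 is 2.0.
print(surrogate.coef_)
```

The coefficients of the weighted linear fit recover the local slopes of the complex model, which is exactly the "locally linear" assumption the bullet describes.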
  • shap

    A game theoretic approach to explain the output of any machine learning model.

  • If you are using more of a black-box method, two of the more common ways to determine how your independent variables relate to your dependent variable are Shapley values and LIME. Shapley values come from game theory in economics. Basically the approach answers how much each feature contributes to the predicted value, relative to the average prediction, by looking at the average marginal contribution of a specific feature value across all potential combinations of feature values. A good Python implementation and more details can be found here.
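The "average marginal contribution across all combinations" can be computed exactly for a tiny example. This is a bare sketch of the Shapley idea, not the shap library: `model`, `x`, and `means` are hypothetical, and replacing missing features with their mean is a simplification (the real library uses smarter estimators).

```python
from itertools import permutations

# Hypothetical instance to explain and hypothetical dataset means used
# as the stand-in value for "absent" features.
means = {"age": 40, "income": 50}
x = {"age": 60, "income": 80}

def model(values):
    # Hypothetical model with an interaction term.
    return 0.5 * values["age"] + 0.2 * values["income"] + 0.01 * values["age"] * values["income"]

def predict(subset):
    # Features outside `subset` fall back to their dataset mean.
    vals = {f: (x[f] if f in subset else means[f]) for f in x}
    return model(vals)

features = list(x)
shap_values = {f: 0.0 for f in features}
perms = list(permutations(features))
for order in perms:
    seen = set()
    for f in order:
        before = predict(seen)       # prediction without this feature
        seen.add(f)
        # Marginal contribution of f in this ordering, averaged over orderings.
        shap_values[f] += (predict(seen) - before) / len(perms)

print(shap_values)
# The contributions sum to the gap between this prediction and the baseline:
print(predict(set(features)) - predict(set()))
```

A key property visible here is efficiency: the per-feature values always add up to the difference between the model's prediction for this instance and the baseline prediction.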

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
