Can you interrogate a machine learning model to find out why it gave certain predictions?

This page summarizes the projects mentioned and recommended in the original post on /r/MLQuestions

  • captum

    Model interpretability and understanding for PyTorch

  • Sometimes. If explainable predictions are part of your business requirements, it's probably better not to rely entirely on black-box models; instead, design a system that surfaces the information you need. If you do end up using a black-box model, there are still methods that attempt to attribute its predictions to input features. Here's an example of a toolkit for attaching explanations post hoc to black-box model predictions: https://github.com/pytorch/captum
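    To make the idea concrete, here is a minimal sketch of integrated gradients, one of the post-hoc attribution methods Captum implements, written in plain NumPy against a toy model rather than a real network. The function names and the toy model F(x) = Σxᵢ² are illustrative assumptions, not part of Captum's API.

    ```python
    import numpy as np

    def integrated_gradients(f_grad, x, baseline, steps=200):
        """Attribute f's output to each input feature by averaging
        gradients along the straight path from baseline to x."""
        # Midpoint-rule Riemann sum approximating the path integral
        alphas = (np.arange(steps) + 0.5) / steps
        total = np.zeros_like(x, dtype=float)
        for a in alphas:
            total += f_grad(baseline + a * (x - baseline))
        avg_grad = total / steps
        return (x - baseline) * avg_grad

    # Toy "model": F(x) = sum(x**2), whose gradient is 2x.
    f = lambda x: np.sum(x ** 2)
    f_grad = lambda x: 2 * x

    x = np.array([1.0, -2.0, 3.0])
    baseline = np.zeros_like(x)
    attr = integrated_gradients(f_grad, x, baseline)

    print(attr)                       # ≈ [1. 4. 9.]
    # Completeness check: attributions sum to F(x) - F(baseline)
    print(attr.sum(), f(x) - f(baseline))
    ```

    The completeness property shown in the last line (attributions summing to the change in model output) is what makes this family of methods useful for sanity-checking an explanation. Captum's `IntegratedGradients` applies the same idea to PyTorch models via autograd instead of a hand-supplied gradient.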

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • [D] [R] Research Problem about Weakly Supervised Learning for CT Image Semantic Segmentation

    1 project | /r/MachineLearning | 24 Apr 2023
  • What kind of explainability techniques exist for Reinforcement learning?

    1 project | /r/reinforcementlearning | 24 Mar 2022
  • [D] How do you choose which Black-Box Explainability method to use?

    1 project | /r/MachineLearning | 30 Jan 2022
  • DeepLIFT or other explainable api implementations for JAX (like captum for pytorch)?

    1 project | /r/JAX | 10 Dec 2021
  • How to extract features from a convolutional network (CNN) on raw data with explainable AI (XAI) techniques?

    1 project | /r/deeplearning | 25 Oct 2021