How to Detect AI Hallucinations

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • AI

    Explore the forefront of AI innovation with this dedicated repository, housing cutting-edge examples and implementations. Dive into the latest advancements, stay ahead with groundbreaking applications, and harness the power of state-of-the-art models and techniques. Elevate your understanding of artificial intelligence through hands-on work (by vishalmysore)

  • Code for this article is available here and here, but as always, I suggest reading the full article for a better understanding.

  • hallucination-leaderboard

    Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents

  • To check out the hallucination leaderboard, click here.

  • selfcheckgpt

    SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models

  • SelfCheckGPT is a method for detecting hallucinations in Large Language Models (LLMs) without any external resources. It works by sampling multiple responses from an LLM for the same query and measuring the consistency between them: inconsistencies and contradictions across the samples indicate likely hallucinations in the generated text. Because it requires no external database, it can be applied to black-box models, making it a versatile tool for flagging unreliable LLM output. SelfCheckGPT outperforms comparable methods and serves as a strong baseline for assessing the reliability of LLM-generated text.

    Its main features:

    - MQAG (Multiple-choice Question Answering and Generation): evaluates information consistency between a source and a summary using multiple-choice questions. It consists of a question-generation stage, statistical distance analysis (with total variation as the main distance), and an answerability threshold, offering a novel way to assess the information content of summaries through question answering.
    - Comparing multiple responses: measures consistency across several responses generated by an LLM to identify potential hallucinations.
    - Sampling responses: by drawing multiple samples, SelfCheckGPT can detect inconsistencies and contradictions in the generated text.
    - Question answering: generates multiple-choice questions and evaluates the answers to assess the consistency of the information.
    - Entropy-based metrics: analyzes the probability distribution of words in the generated text to gauge the reliability of the information.
    - Zero-resource approach: relies on no external databases, so it is applicable to black-box LLMs (and that is the exact reason I like it).
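The sampling-and-consistency idea above can be sketched in a few lines. Note this is a toy illustration using simple token overlap, not the actual selfcheckgpt package (which uses NLI, BERTScore, or MQAG scorers); all names and the example data are invented for the sketch.

```python
# Toy sketch of SelfCheckGPT's core idea: score each sentence of a main
# response by how consistently it is supported across extra sampled
# responses. Real SelfCheckGPT uses NLI/BERTScore/MQAG, not token overlap.

def _tokens(text):
    return set(text.lower().replace(".", "").split())

def support_score(sentence, sample):
    """Fraction of the sentence's tokens found in one sampled response."""
    sent = _tokens(sentence)
    if not sent:
        return 0.0
    return len(sent & _tokens(sample)) / len(sent)

def hallucination_scores(sentences, samples):
    """Higher score = less support across samples = likely hallucinated."""
    scores = []
    for sentence in sentences:
        avg = sum(support_score(sentence, s) for s in samples) / len(samples)
        scores.append(1.0 - avg)
    return scores

# Sentences from the main LLM answer (second claim is fabricated):
sentences = [
    "Paris is the capital of France.",
    "Paris hosted the 1900 Winter Olympics.",
]
# Extra stochastic samples for the same prompt:
samples = [
    "The capital of France is Paris.",
    "Paris is the capital city of France.",
]
scores = hallucination_scores(sentences, samples)
```

The consistent claim scores near 0, while the unsupported one scores high, which is exactly the signal the real method extracts with stronger scorers.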

  • RefChecker

    RefChecker provides automatic checking pipeline and benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.

  • RefChecker operates through a three-stage pipeline:

    1. Triplet extraction: uses LLMs to break the text down into knowledge triplets for fine-grained analysis.
    2. Checking: predicts a hallucination label for each extracted triplet, using LLM-based or NLI-based checkers.
    3. Aggregation: combines the triplet-level results into an overall hallucination label for the input text, based on predefined rules.

    Triplets, in RefChecker's context, are knowledge units extracted from text by LLMs; each consists of three elements capturing essential information. Breaking the original text into these structured components enables finer-grained detection and evaluation of individual claims, and the triplets play a crucial role in detecting hallucinations and assessing factual accuracy. RefChecker additionally includes a human-labeling tool, a search engine for zero-context settings, and a localization model that maps triplets back to reference snippets.

    RefChecker supports a range of LLMs for local use, including GPT-4, GPT-3.5-Turbo, InstructGPT, Falcon, Alpaca, LLaMA 2, and Claude 2; these can be used within the framework for response generation, claim extraction, and hallucination detection without connecting to external cloud services. I did not use it, since it requires integration with several other providers or a large GPU for the Mistral model, but it looks very promising and I will come back to it (depending on how much I want to spend on a GPU for my open-source project).
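The three-stage pipeline above can be illustrated with a minimal sketch. The "checker" here is naive substring matching purely for demonstration (the real RefChecker uses LLM or NLI checkers), and the triplets, labels, and example reference are all invented for this sketch.

```python
# Illustrative sketch of RefChecker's pipeline: (1) knowledge triplets,
# (2) a per-triplet checker, (3) rule-based aggregation. The checker below
# is naive substring matching, standing in for an LLM/NLI checker.

def check_triplet(triplet, reference):
    """Stage 2: label one (subject, predicate, object) triplet."""
    subject, predicate, obj = triplet
    ref = reference.lower()
    if subject.lower() in ref and obj.lower() in ref:
        return "Entailment"  # both entities found in the reference
    return "Neutral"         # cannot be verified from the reference

def aggregate(labels):
    """Stage 3: strictest label wins, a simple predefined rule."""
    if "Contradiction" in labels:
        return "Contradiction"
    if "Neutral" in labels:
        return "Neutral"
    return "Entailment"

reference = "Mount Everest, at 8,849 m, is Earth's highest mountain."
# Stage 1 (triplet extraction) is done by an LLM in RefChecker;
# here the triplets are written by hand. The second is unsupported.
triplets = [
    ("Mount Everest", "is", "highest mountain"),
    ("Mount Everest", "located in", "Canada"),
]
labels = [check_triplet(t, reference) for t in triplets]
overall = aggregate(labels)
```

Working at the triplet level is what gives the method its fine granularity: one unverifiable triplet changes the overall label without hiding which specific claim failed.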

  • factool

    FacTool: Factuality Detection in Generative AI

  • FACTOOL is a task- and domain-agnostic framework designed to tackle the escalating challenge of factual error detection in generative AI. It is a five-step, tool-augmented pipeline consisting of claim extraction, query generation, tool querying, evidence collection, and verification. FACTOOL uses tools such as Google Search, Google Scholar, code interpreters, Python, and even LLMs themselves to detect factual errors in knowledge-based QA, code generation, math problem solving, and scientific literature review writing. It outperforms all other baselines across these scenarios and proves notably more robust at its specified tasks than LLMs checking themselves.
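The five steps can be laid out as a skeleton. Every function below is a deterministic stand-in for the LLM and search-tool calls the real framework makes; the function names, the tiny knowledge base, and the example claims are all assumptions made for this sketch.

```python
# Skeleton of FACTOOL's five steps: claim extraction, query generation,
# tool querying, evidence collection, and verification. All stages are
# deterministic stand-ins for the LLM/search-tool calls the framework uses.

def extract_claims(text):
    # Step 1: FACTOOL prompts an LLM to split a response into atomic
    # claims; here we simply split on sentences.
    return [s.strip() for s in text.split(".") if s.strip()]

def generate_queries(claim):
    # Step 2: the real framework prompts an LLM to write search queries.
    return [claim]

def query_tool(query, knowledge_base):
    # Step 3: stand-in for Google Search / Scholar / a code interpreter;
    # returns knowledge-base entries sharing any word with the query.
    words = set(query.lower().split())
    return [doc for doc in knowledge_base if words & set(doc.lower().split())]

def verify(claim, evidence):
    # Step 5: an LLM judges the claim against the evidence; here a claim
    # is "verified" if all of its words occur in one evidence document.
    claim_words = set(claim.lower().split())
    return any(claim_words <= set(doc.lower().split()) for doc in evidence)

knowledge_base = ["water boils at 100 degrees celsius at sea level"]
response = "Water boils at 100 degrees celsius. The moon is made of cheese."

verdicts = {}
for claim in extract_claims(response):
    evidence = []  # Step 4: collect evidence across all queries
    for query in generate_queries(claim):
        evidence.extend(query_tool(query, knowledge_base))
    verdicts[claim] = verify(claim, evidence)
```

Swapping any single stand-in for a real tool (a search API in step 3, an LLM judge in step 5) is how the framework stays task- and domain-agnostic.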

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • Launch HN: Danswer (YC W24) – Open-source AI search and chat over private data

    5 projects | news.ycombinator.com | 22 Feb 2024
  • Went down the rabbit hole of 100% local RAG, it works but are there better options?

    5 projects | /r/LocalLLaMA | 6 Dec 2023
  • Ask HN: Best Alternatives to OpenAI ChatGPT?

    2 projects | news.ycombinator.com | 22 Nov 2023
  • Inflection-2: the next step up

    1 project | news.ycombinator.com | 22 Nov 2023
  • LLMs by Hallucination Rate

    1 project | /r/patient_hackernews | 20 Nov 2023