| | RefChecker | hallucination-leaderboard |
|---|---|---|
| Mentions | 1 | 14 |
| Stars | 213 | 1,084 |
| Growth | 2.8% | 5.0% |
| Activity | 7.6 | 8.7 |
| Last commit | 12 days ago | about 1 month ago |
| Language | Python | - |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RefChecker
-
How to Detect AI Hallucinations
RefChecker operates through a 3-stage pipeline:

1. Triplet Extraction: uses LLMs to break the text down into knowledge triplets for fine-grained analysis.
2. Checking: predicts hallucination labels on the extracted triplets using LLM-based or NLI-based checkers.
3. Aggregation: combines the individual triplet-level results into an overall hallucination label for the input text, based on predefined rules.

RefChecker also includes a human labeling tool, a search engine for Zero Context settings, and a localization model that maps knowledge triplets back to reference snippets for comprehensive analysis.

Triplets, in the context of RefChecker, are knowledge units extracted from text by Large Language Models (LLMs). Each triplet consists of three elements that capture a single piece of information from the text. Breaking the original text into these structured components enables finer-grained detection and evaluation of claims, and the triplets play a crucial role in detecting hallucinations and assessing the factual accuracy of claims made by language models.

RefChecker supports a variety of LLMs that can be used locally for processing and analysis, including GPT-4, GPT-3.5-Turbo, InstructGPT, Falcon, Alpaca, LLaMA 2, and Claude 2. These models can be used within the RefChecker framework for tasks such as response generation, claim extraction, and hallucination detection without connecting to external cloud-based services.

I did not use it, as it requires integration with several other providers or a large GPU for the Mistral model. But it looks very promising, and I will come back to it in the future (depending on how much I want to spend on a GPU for my open-source project).
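The three stages above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the extract-check-aggregate flow, not RefChecker's actual API: the function names, the toy string-matching "checker", and the aggregation rule (one contradicted triplet marks the whole response as hallucinated) are all assumptions standing in for the LLM/NLI components the real tool uses.

```python
from typing import List, Tuple

# A triplet is a (subject, predicate, object) knowledge unit.
Triplet = Tuple[str, str, str]


def extract_triplets(text: str) -> List[Triplet]:
    """Stage 1: in RefChecker an LLM does this; this toy version
    only handles simple "X is Y" sentences."""
    triplets = []
    for sentence in text.split("."):
        parts = sentence.strip().split(" is ")
        if len(parts) == 2:
            triplets.append((parts[0].strip(), "is", parts[1].strip()))
    return triplets


def check_triplet(triplet: Triplet, reference: str) -> str:
    """Stage 2: label one triplet against the reference.
    RefChecker uses LLM- or NLI-based checkers; this stand-in just
    does substring matching."""
    subject, _, obj = triplet
    if subject in reference and obj in reference:
        return "Entailment"
    if subject in reference:
        return "Contradiction"
    return "Neutral"


def aggregate(labels: List[str]) -> str:
    """Stage 3: an example 'strict' rule -- a single contradicted
    triplet marks the whole response as hallucinated."""
    if "Contradiction" in labels:
        return "Contradiction"
    if labels and all(label == "Entailment" for label in labels):
        return "Entailment"
    return "Neutral"


response = "Paris is the capital of France. Paris is in Germany."
reference = "Paris is the capital of France."

labels = [check_triplet(t, reference) for t in extract_triplets(response)]
print(aggregate(labels))  # the second triplet contradicts the reference
```

The point of the structure is that the fabricated claim ("Paris is in Germany") is caught at the triplet level even though the rest of the response is correct, which is exactly what sentence-level checking would miss.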
hallucination-leaderboard
-
How to Detect AI Hallucinations
To check out the Hallucination Leaderboard, see https://github.com/vectara/hallucination-leaderboard
-
Launch HN: Danswer (YC W24) – Open-source AI search and chat over private data
Nice to see yet another open source approach to LLM/RAG. For those who do not want to meddle with the complexity of do-it-yourself, Vectara (https://vectara.com) provides a RAG-as-a-service approach - pretty helpful if you want to stay away from having to worry about all the details, scalability, security, etc - and just focus on building your RAG application.
-
Went down the rabbit hole of 100% local RAG, it works but are there better options?
Check this leaderboard, it is specific for RAG use case: https://github.com/vectara/hallucination-leaderboard
-
Which LLM framework(s) do you use in production and why?
You should also check us out (https://vectara.com) - we provide RAG as a service so you don't have to do all the heavy lifting and putting together the pieces yourself.
-
Ask HN: Best Alternatives to OpenAI ChatGPT?
Llama 2 (and variants). Has the lowest hallucination rate (https://github.com/vectara/hallucination-leaderboard), and it's open source, so we know what went into it and the community can improve it.
-
Inflection-2: the next step up
This is just typical of so much work in the field. They pick and choose which models to compare against and on which benchmarks. If this model was truly great, they would be comparing against Claude 2 and GPT4 across a bunch of different benchmarks. Instead they compare against Palm 2, which in a lot of tests is a weak model (https://venturebeat.com/ai/google-bard-fails-to-deliver-on-i....) and prone to hallucination (https://github.com/vectara/hallucination-leaderboard).
- LLMs by Hallucination Rate
What are some alternatives?
Woodpecker - ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
SuperAGI - <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
nohide - editors that don't really delete
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
YiVal - Your Automatic Prompt Engineering Assistant for GenAI Applications
awesome-generative-ai - A curated list of modern Generative Artificial Intelligence projects and services
ChatGPT-Prompts - ChatGPT and Bing AI prompt curation
awesome-generative-deep-art - A curated list of Generative AI tools, works, models, and references [Moved to: https://github.com/filipecalegario/awesome-generative-ai]
amazon-bedrock-with-builder-and-command-patterns - A simple, yet powerful implementation in Java that allows developers to write a rather straightforward code to create the API requests for the different foundation models supported by Amazon Bedrock.