RefChecker vs Woodpecker

| | RefChecker | Woodpecker |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 213 | 563 |
| Growth | 2.8% | - |
| Activity | 7.6 | 8.9 |
| Latest Commit | 12 days ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RefChecker

- How to Detect AI Hallucinations
RefChecker operates through a three-stage pipeline (a code sketch follows below):

1. Triplet Extraction: uses LLMs to break the text down into knowledge triplets for fine-grained analysis.
2. Checking: predicts a hallucination label for each extracted triplet using an LLM-based or NLI-based checker.
3. Aggregation: combines the individual triplet-level results into an overall hallucination label for the input text, based on predefined rules.

Additionally, RefChecker includes a human labeling tool, a search engine for the Zero Context setting, and a localization model that maps knowledge triplets back to reference snippets.

Triplets, in RefChecker's terminology, are knowledge units extracted from text by Large Language Models (LLMs). Each triplet consists of three elements that capture one essential piece of information from the text. Breaking the original text down into these structured components enables finer-grained detection and evaluation of claims, which is central to detecting hallucinations and assessing the factual accuracy of a model's output.

RefChecker supports a range of popular LLMs, including GPT-4, GPT-3.5-Turbo, InstructGPT, Falcon, Alpaca, LLaMA 2, and Claude 2; the open models among these can be run locally, so response generation, claim extraction, and hallucination detection can happen without connections to cloud-based services.

I did not use it, as it requires integration with several other providers or a large GPU for the Mistral model. But it looks very promising, and I will come back to it in the future (depending on how much I want to spend on a GPU for my open-source project).
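To make the pipeline concrete, here is a minimal sketch of the three stages in plain Python. This is an illustrative sketch, not RefChecker's actual API: the function names, the toy word-overlap checker, and the "strictest label wins" aggregation rule are all my assumptions; the Entailment/Neutral/Contradiction labels follow the usual NLI-style checker convention.

```python
from dataclasses import dataclass
from typing import List

# A knowledge triplet: the smallest checkable unit of a claim,
# e.g. ("Paris", "is the capital of", "France").
@dataclass
class Triplet:
    subject: str
    predicate: str
    obj: str

# Per-triplet labels, following the NLI-style convention.
ENTAILMENT = "Entailment"        # supported by the reference
NEUTRAL = "Neutral"              # cannot be verified from the reference
CONTRADICTION = "Contradiction"  # conflicts with the reference


def extract_triplets(text: str) -> List[Triplet]:
    """Stage 1 (Triplet Extraction). RefChecker prompts an LLM here;
    this stub returns a fixed example so the sketch runs end to end."""
    return [Triplet("Paris", "is the capital of", "France")]


def check_triplet(t: Triplet, reference: str) -> str:
    """Stage 2 (Checking). RefChecker uses an LLM- or NLI-based checker;
    this toy version only tests whether the triplet's words appear in
    the reference, which is NOT how the real checker works."""
    claim_words = f"{t.subject} {t.predicate} {t.obj}".lower().split()
    if all(w in reference.lower() for w in claim_words):
        return ENTAILMENT
    return NEUTRAL


def aggregate(labels: List[str]) -> str:
    """Stage 3 (Aggregation). One plausible 'strictest label wins' rule:
    a single contradicted triplet marks the response as hallucinated."""
    if CONTRADICTION in labels:
        return CONTRADICTION
    if NEUTRAL in labels:
        return NEUTRAL
    return ENTAILMENT


def check_response(response: str, reference: str) -> str:
    triplets = extract_triplets(response)
    labels = [check_triplet(t, reference) for t in triplets]
    return aggregate(labels)


print(check_response(
    "Paris is the capital of France.",
    "Paris is the capital and largest city of France.",
))  # -> "Entailment"
```

In the real pipeline, both the extractor and the checker would call an LLM (or an NLI model) with suitable prompts, and the aggregation rules are predefined by the framework rather than hardcoded like this.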
Woodpecker

- shining the spotlight on CogVLM
- Woodpecker: Hallucination Correction for Multimodal Large Language Models (https://github.com/BradyFU/Woodpecker)
What are some alternatives?
hallucination-leaderboard - Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Qwen - The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
ChatGLM2-6B - An open-source bilingual chat LLM
GPT4RoI - GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
CogVLM - A state-of-the-art open visual language model and multimodal pretrained model