OpenFactVerification
Loki: Open-source solution designed to automate the process of verifying factuality (by Libr-AI)
FActScore
A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation" (by shmsw25)
| | OpenFactVerification | FActScore |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 891 | 219 |
| Growth | 4.3% | - |
| Activity | 8.1 | 6.4 |
| Last commit | 6 days ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
OpenFactVerification
Posts with mentions or reviews of OpenFactVerification. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
- Show HN: Loki Needs You – Collaborate on an Open-Source Fact-Checking AI
- An Open Source Tool for Multimodal Fact Verification
Hello vinni2, thank you for mentioning the paper. However, I noticed that it hasn't gone through peer review yet. The paper also suggests that fine-tuning may work better than in-context learning, but that isn't a problem: you can fine-tune any LLM, such as GPT-3.5, for this purpose and use it with this framework. Once you have fine-tuned GPT on your specific data, for example, you only need to change the model name (https://github.com/Libr-AI/OpenFactVerification/blob/8fd1da9...). I believe this approach can lead to better results than what the paper reports.
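The model swap described in the comment can be sketched as follows. This is a hypothetical illustration, not OpenFactVerification's actual config schema: the `build_config` helper and its keys are invented for the example, and the `ft:` model identifier follows OpenAI's naming convention for fine-tuned models. The point is only that a fine-tuned model drops in by changing a single name.

```python
# Hypothetical sketch: swapping a base model for a fine-tuned one by name.
# The config keys below are illustrative, not OpenFactVerification's real schema.

def build_config(model_name: str) -> dict:
    """Return a minimal pipeline config that varies only in the model name."""
    return {
        "model": model_name,   # the only field that changes between runs
        "temperature": 0.0,    # deterministic output suits fact checking
        "max_tokens": 512,
    }

# Default: the stock base model.
base_cfg = build_config("gpt-3.5-turbo")

# After fine-tuning, OpenAI-style fine-tuned models are addressed by an
# "ft:" identifier; swapping it in is the only change required.
tuned_cfg = build_config("ft:gpt-3.5-turbo:my-org:factcheck:abc123")

assert base_cfg.keys() == tuned_cfg.keys()  # same pipeline, different model
```

Everything else in the pipeline (prompting, decomposition, verification) stays untouched, which is what makes the "just change the model name" claim plausible.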
FActScore
Posts with mentions or reviews of FActScore. We have used some of these posts to build our list of alternatives and similar projects.
- Long-form factuality in large language models
Looks like a slight modification of FActScore [1], but instead of using Wikipedia as a verification source, they use Google Search. They also claim to include a wider range of topics. That said, FActScore allows you to use whatever knowledge source and topics you want [2].
[1]: https://arxiv.org/abs/2305.14251
[2]: https://github.com/shmsw25/FActScore?tab=readme-ov-file#to-u...
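As a concrete illustration of [2], FActScore's README describes registering a custom knowledge source from a JSONL file where each line is a record with "title" and "text" fields. The snippet below only prepares such a file; the sample articles are made up, and the registration call at the end is paraphrased from the repo's README (it needs the package and an OpenAI key, so it is left commented out) rather than verified here.

```python
import json
import tempfile
from pathlib import Path

# Build a JSONL knowledge source in the format FActScore's README documents:
# one JSON object per line with "title" and "text" fields.
articles = [
    {"title": "Loki", "text": "Loki is an open-source fact-verification tool by Libr-AI."},
    {"title": "FActScore", "text": "FActScore evaluates factual precision of long-form generation."},
]

data_path = Path(tempfile.mkdtemp()) / "my_knowledge.jsonl"
with data_path.open("w", encoding="utf-8") as f:
    for article in articles:
        f.write(json.dumps(article) + "\n")

# Registration, paraphrased from the FActScore README (not executed here):
# from factscore.factscorer import FactScorer
# fs = FactScorer(openai_key="api.key")
# fs.register_knowledge_source("my_knowledge", data_path=str(data_path))
```

Because the knowledge source is just a named JSONL corpus, swapping Wikipedia for any other document collection (or a different topic set) is a data change, not a code change.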