| | GenRead | opencompass |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 265 | 2,836 |
| Growth | - | 20.2% |
| Activity | 10.0 | 9.7 |
| Latest Commit | over 1 year ago | about 14 hours ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
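The site doesn't publish its exact activity formula; below is a minimal sketch of one way to weight recent commits more heavily than older ones, assuming exponential decay. The half-life parameter and scaling are illustrative, not the site's actual values.

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Illustrative recency-weighted commit score: each commit contributes
    exponentially less the older it is. This is NOT the site's published
    formula, just a sketch of 'recent commits have higher weight'."""
    now = time.time()
    half_life_secs = half_life_days * 86400
    # A commit made exactly one half-life ago counts as 0.5 commits.
    return sum(0.5 ** ((now - t) / half_life_secs) for t in commit_timestamps)
```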
GenRead
This Artificial Intelligence Research Proposes a New Method That Directly Generates Contextual Docs for a Question Instead of Retrieving External Docs
Quick Read: https://www.marktechpost.com/2023/02/08/this-artificial-intelligence-research-proposes-a-new-method-that-directly-generates-contextual-docs-for-a-question-instead-of-retrieving-external-docs/ Paper: https://arxiv.org/pdf/2209.10063.pdf Github: https://github.com/wyu97/GenRead
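The core idea is a two-step generate-then-read pipeline: first prompt an LLM to write a background document for the question, then answer the question by reading that generated document instead of retrieved passages. Here is a minimal sketch; the model choice is a stand-in (the paper used InstructGPT-era models) and the prompts are paraphrased, not the paper's exact templates.

```python
from openai import OpenAI

client = OpenAI()  # any completion backend works; shown with OpenAI's client

def llm(prompt: str) -> str:
    # Stand-in model choice, not the one used in the paper.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def generate_then_read(question: str) -> str:
    # Step 1 (generate): instead of retrieving documents from an external
    # corpus, ask the model itself to write a background document.
    doc = llm(f"Generate a background document to answer the question: {question}")
    # Step 2 (read): answer the question conditioned on the generated document.
    return llm(
        "Refer to the passage below and answer the following question.\n"
        f"Passage: {doc}\nQuestion: {question}\nAnswer:"
    )
```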
opencompass
Show HN: 10 times faster LLM evaluation with Bayesian optimization
Fair question.
Evaluation refers to the phase after training that checks whether the training actually worked.
Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're finetuning on a small, domain-specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them take advantage of the fact that many evaluation queries might be similar, so they all evaluate on every given query. That's where this project comes in handy.
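The comment's point, that similarity between evaluation queries lets you pay for only a subset of the expensive LLM calls, can be sketched with a Gaussian-process surrogate over query embeddings and an uncertainty-driven acquisition rule. This is an illustrative reconstruction of the idea, not bocoel's actual API: `evaluate_fn` (the per-query LLM scoring call) and the pure-exploration acquisition are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def estimate_mean_score(embeddings: np.ndarray, evaluate_fn, budget: int = 50) -> float:
    """Estimate a model's average score over a whole query set while only
    paying for `budget` evaluations. A GP surrogate fitted on query
    embeddings predicts scores for unevaluated queries; each round we
    evaluate the query the surrogate is least certain about."""
    seen, scores = [], []
    gp = GaussianProcessRegressor()
    idx = np.random.randint(len(embeddings))  # start from a random query
    for _ in range(budget):
        seen.append(idx)
        scores.append(evaluate_fn(idx))  # the expensive LLM call
        gp.fit(embeddings[seen], scores)
        mean, std = gp.predict(embeddings, return_std=True)
        std[seen] = -1.0  # never re-evaluate an already-seen query
        idx = int(np.argmax(std))  # pick the most uncertain remaining query
    # Surrogate predictions stand in for the queries we never evaluated.
    return float(np.mean(gp.predict(embeddings)))
```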
What are some alternatives?
PaddleNLP - Easy-to-use and powerful NLP and LLM library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including text classification, neural search, question answering, information extraction, document intelligence, sentiment analysis, etc.
lm-evaluation-harness - A framework for few-shot evaluation of language models.
FARM - Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
deepeval - The LLM Evaluation Framework
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
promptbench - A unified evaluation framework for large language models
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
bocoel - Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few lines of modular code.
simpletransformers - Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI