llm-search VS alpaca_eval

Compare llm-search vs alpaca_eval and see what their differences are.

alpaca_eval

An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. (by tatsu-lab)
                 llm-search           alpaca_eval
Mentions         2                    4
Stars            396                  1,175
Growth           -                    13.1%
Activity         8.5                  9.6
Latest commit    12 days ago          1 day ago
Language         Jupyter Notebook     Jupyter Notebook
License          MIT License          Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
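
The exact formula behind the activity number isn't given here, but the idea of weighting recent commits more heavily can be illustrated with a simple exponential-decay sketch. This is purely illustrative, assuming a made-up half-life parameter; it is not the site's actual metric.

```python
def activity_score(commit_ages_days: list[float], half_life_days: float = 30.0) -> float:
    """Illustrative recency-weighted activity score (not the site's real formula).

    Each commit contributes a weight that halves every `half_life_days`,
    so recent commits count more than older ones.
    """
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Example: three commits made 1, 10, and 200 days ago; the recent ones dominate.
print(activity_score([1, 10, 200]))
```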

llm-search

Posts with mentions or reviews of llm-search. We have used some of these posts to build our list of alternatives and similar projects.

alpaca_eval

Posts with mentions or reviews of alpaca_eval. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-28.
  • UltraLM-13B reaches top of AlpacaEval leaderboard
    3 projects | /r/LocalLLaMA | 28 Jun 2023
Alpaca Eval is open source and was developed by the same team who trained the Alpaca model, AFAIK. It is not like what you said in the other comment.
  • [P] AlpacaEval : An Automatic Evaluator for Instruction-following Language Models
    2 projects | /r/LocalLLaMA | 8 Jun 2023
    I have been going deep in this space for my can-ai-code project and was looking at the config that WizardLM was run with: https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/models_configs/wizardlm-13b/configs.yaml
    2 projects | /r/MachineLearning | 8 Jun 2023
    an automatic evaluator that is easy to use, fast, cheap and validated against 20K human annotations. It actually has a higher agreement with majority vote of humans than a single human annotator! Of course, our method still has limitations which we discuss here!
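
The win-rate idea described in the posts above can be summarized in a few lines: an LLM judge compares each model output against a reference output for the same instruction, and the leaderboard number is the fraction of instructions where the model's output is preferred. The sketch below only illustrates that core idea under assumed names (`ask_judge` is a hypothetical callable supplied by the caller); it is not alpaca_eval's actual API, whose real pipeline adds prompt templates, caching, and bias-mitigation steps.

```python
from typing import Callable

def win_rate(
    instructions: list[str],
    model_outputs: list[str],
    reference_outputs: list[str],
    ask_judge: Callable[[str, str, str], str],  # hypothetical judge: returns "model" or "reference"
) -> float:
    """Fraction of instructions on which the judge prefers the model's output."""
    wins = 0
    for instruction, model_out, ref_out in zip(instructions, model_outputs, reference_outputs):
        wins += ask_judge(instruction, model_out, ref_out) == "model"
    return wins / len(instructions)
```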

What are some alternatives?

When comparing llm-search and alpaca_eval you can also consider the following projects:

DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

Local-LLM-Comparison-Colab-UI - Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run it yourself with the Colab WebUI.

ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models

EasyEdit - An Easy-to-use Knowledge Editing Framework for LLMs.

FinGPT - FinGPT: Open-Source Financial Large Language Models! 🔥 We release the trained model on HuggingFace.

alpaca_farm - A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.

anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.

medmcqa - A large-scale (194k) Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.