opencompass VS evals

Compare opencompass vs evals and see what are their differences.

opencompass

OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) across 100+ datasets. (by open-compass)

evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. (by openai)
                opencompass          evals
Mentions        1                    49
Stars           2,699                14,048
Growth          17.8%                3.3%
Activity        9.7                  9.3
Latest commit   2 days ago           7 days ago
Language        Python               Python
License         Apache License 2.0   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
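The exact weighting behind the activity number isn't published; a minimal sketch of one plausible scheme (exponential decay over commit age, with a made-up half-life parameter) might look like:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    dominate. The half-life is a hypothetical knob; the site's
    real formula is not published."""
    now = datetime.now(timezone.utc)
    total = 0.0
    for stamp in commit_dates:
        age_days = (now - stamp).total_seconds() / 86400.0
        total += 0.5 ** (age_days / half_life_days)
    return total

# A commit from today counts ~1.0; one from two months ago ~0.25.
print(activity_score([datetime.now(timezone.utc)]))
```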

opencompass

Posts with mentions or reviews of opencompass. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-13.
  • Show HN: 10 times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training that checks whether the training actually worked.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries are similar; they all evaluate every given query. And that's where this project might come in handy.
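As a rough, framework-agnostic illustration of the evaluation phase that comment describes, here is a minimal exact-match accuracy loop. `model_answer` is a hypothetical stand-in for whatever inference call your stack exposes, and the toy benchmark exists only to make the sketch runnable:

```python
def evaluate(model_answer, benchmark):
    """Score a model on held-out (prompt, expected) pairs after
    training. Exact-match accuracy stands in for a real metric;
    `model_answer` is whatever inference call your stack exposes."""
    correct = 0
    for prompt, expected in benchmark:
        correct += model_answer(prompt).strip() == expected.strip()
    return correct / len(benchmark)

# Toy benchmark and dummy "model", for illustration only.
benchmark = [("2 + 2 =", "4"), ("Capital of France?", "Paris")]
dummy = lambda prompt: "4" if "2 + 2" in prompt else "Paris"
print(evaluate(dummy, benchmark))  # 1.0 on this toy set
```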
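The comment's second point, that near-duplicate queries needn't all be scored, can be sketched in the same spirit. This is not bocoel's actual algorithm (bocoel applies Bayesian optimization over an embedding space); it is a deliberately crude bucketing scheme, and the `signature` function below is a made-up placeholder, meant only to show why exploiting query similarity cuts inference calls:

```python
import random

def evaluate_by_bucket(model_answer, benchmark, signature, seed=0):
    """Bucket queries by a crude signature, score one sampled
    representative per bucket, and weight the result by bucket
    size. A sketch of the 'many queries are similar' idea only;
    bocoel's real method is Bayesian optimization over an
    embedding space."""
    rng = random.Random(seed)
    buckets = {}
    for prompt, expected in benchmark:
        buckets.setdefault(signature(prompt), []).append((prompt, expected))
    weighted_hits = 0
    for members in buckets.values():
        prompt, expected = rng.choice(members)
        hit = model_answer(prompt).strip() == expected.strip()
        weighted_hits += hit * len(members)
    return weighted_hits / len(benchmark)

# Hypothetical signature: bucket prompts by their first word, so the
# model runs once per bucket instead of once per query.
first_word = lambda prompt: prompt.split()[0].lower()
```

With a real embedding as the signature, the number of model calls drops from one per query to one per bucket, which is the saving the comment is pointing at.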

evals

Posts with mentions or reviews of evals. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-13.

What are some alternatives?

When comparing opencompass and evals you can also consider the following projects:

lm-evaluation-harness - A framework for few-shot evaluation of language models.

gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs

deepeval - The LLM Evaluation Framework

promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.

promptbench - A unified evaluation framework for large language models

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

bocoel - Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few lines of modular code.

gpt4free - The official gpt4free repository | a collection of powerful language models

clownfish - Constrained Decoding for LLMs against JSON Schema

BIG-bench - Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models

langkit - 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.