| | vibraniumdome | opencompass |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 41 | 2,836 |
| Growth | - | 21.8% |
| Activity | 9.3 | 9.7 |
| Last commit | 2 months ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Show HN: 10 times faster LLM evaluation with Bayesian optimization
Fair question.
Evaluation refers to the phase after training where you check whether the trained model is actually good.
Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at the evaluation step. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them take advantage of the fact that many evaluation queries are similar: they all evaluate on every given query. And that's where this project might come in handy.
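To make that query-selection idea concrete, here is a minimal, self-contained Python sketch (not bocoel's actual API): embed the benchmark queries, evaluate the LLM on only a small budget of them, fit a Gaussian-process surrogate over the embedding space, and estimate the full-benchmark score from the surrogate. The embeddings and the `llm_score` function below are synthetic stand-ins; a real setup would use sentence embeddings and an actual LLM-based scorer.

```python
# Sketch: approximate a full benchmark score by evaluating only a few queries,
# chosen with a simple Bayesian-optimization-style acquisition (max uncertainty).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in for sentence embeddings of 2,000 benchmark queries (dim=16).
embeddings = rng.normal(size=(2000, 16))

def llm_score(idx):
    """Stand-in for an expensive LLM evaluation of query `idx` (0/1 correctness).
    Here the score is just a deterministic function of the embedding, for demo purposes."""
    return float(np.tanh(embeddings[idx] @ np.linspace(-1, 1, 16)) > 0)

budget = 50                      # how many queries we can afford to run
evaluated, scores = [], []

gp = GaussianProcessRegressor(kernel=RBF(length_scale=4.0), alpha=1e-3)
for step in range(budget):
    if step < 5:
        # Seed with a few random queries.
        idx = int(rng.integers(len(embeddings)))
    else:
        # Fit the surrogate on what we've seen and pick the query whose
        # predicted score is most uncertain (pure exploration; one simple
        # acquisition choice among many).
        gp.fit(embeddings[evaluated], scores)
        _, std = gp.predict(embeddings, return_std=True)
        std[evaluated] = -np.inf          # never re-evaluate the same query
        idx = int(np.argmax(std))
    evaluated.append(idx)
    scores.append(llm_score(idx))

# Estimate full-benchmark accuracy from the surrogate instead of running all queries.
gp.fit(embeddings[evaluated], scores)
estimated_accuracy = gp.predict(embeddings).mean()
true_accuracy = np.mean([llm_score(i) for i in range(len(embeddings))])
print(f"estimated={estimated_accuracy:.3f}  true={true_accuracy:.3f}  (ran {budget}/2000 queries)")
```

The point of the sketch is only the shape of the loop: because similar queries tend to get similar scores, a surrogate over the embedding space can cover the benchmark with far fewer LLM calls than evaluating every query.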
What are some alternatives?
llm-workflow-engine - Power CLI and Workflow manager for LLMs (core package)
lm-evaluation-harness - A framework for few-shot evaluation of language models.
aegis - Self-hardening firewall for large language models
deepeval - The LLM Evaluation Framework
AgentBench - A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
promptbench - A unified evaluation framework for large language models
llm-guard - The Security Toolkit for LLM Interactions
bocoel - Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few lines of modular code.
langroid - Harness LLMs with Multi-Agent Programming