lm-evaluation-harness

A framework for few-shot evaluation of language models. (by EleutherAI)
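
For orientation, a minimal sketch of what a run can look like through the library's Python API; the checkpoint and task below are placeholders, and the simple_evaluate entry point assumes a recent (v0.4+) release:

    import lm_eval

    # Few-shot evaluation of a Hugging Face checkpoint on one task.
    # num_fewshot controls how many in-context examples each prompt gets.
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=EleutherAI/pythia-1.4b",  # placeholder checkpoint
        tasks=["hellaswag"],
        num_fewshot=5,
    )
    print(results["results"])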

lm-evaluation-harness Alternatives

Similar projects and alternatives to lm-evaluation-harness


  1. text-generation-webui

    A Gradio web UI for Large Language Models with support for multiple inference backends.

  2. llama.cpp

    LLM inference in C/C++

  3. transformers

    🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

  4. koboldcpp

    Run GGUF models easily with a KoboldAI UI. One File. Zero Install.

  5. gpt-neo

    Discontinued An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

  6. ggml

    Tensor library for machine learning

  7. gpt-neox

    An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

  8. BIG-bench

    Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models

  9. mesh-transformer-jax

    Model parallel transformers in JAX and Haiku

  10. StableLM

    StableLM: Stability AI Language Models

  11. flash-attention

    Fast and memory-efficient exact attention

  12. mach-nix

    Create highly reproducible python environments

  13. aitextgen

    A robust Python tool for text-based AI training and generation using GPT-2.

  14. allennlp

    Discontinued An open-source NLP research library, built on PyTorch.

  15. kiri

    Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models. (by kiri-ai)

  16. dm-haiku

    JAX-based neural network library

  17. opencompass

    OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, etc.) over 100+ datasets.

  18. cedille-ai

    ✒️ Cedille is a large French language model (6B), released under an open-source license

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better lm-evaluation-harness alternative or higher similarity.


lm-evaluation-harness reviews and mentions

Posts with mentions or reviews of lm-evaluation-harness. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-12-21.
  • Generative AI: A Personal Deep Dive – My Notes and Insights Part-2
    7 projects | dev.to | 21 Dec 2024
    A framework for few-shot evaluation of language models.
  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark this using this library (https://github.com/EleutherAI/lm-evaluation-harness). (A sketch of such a run appears after this list.)
  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training that checks whether the training went well.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're finetuning on a small domain-specific subset)!

    So there are quite a few frameworks working on evaluation (https://github.com/microsoft/promptbench, https://github.com/confident-ai/deepeval, https://github.com/openai/evals, https://github.com/EleutherAI/lm-evaluation-harness); however, all of them are quite slow, because LLMs are slow if you don't have infinite money. OpenCompass (https://github.com/open-compass/opencompass) tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries might be similar, and all of them evaluate on every given query. That's where this project might come in handy. (A rough sketch of the subsampling idea appears after this list.)

  • Language Model Evaluation Harness
    1 project | news.ycombinator.com | 25 Nov 2023
  • Best courses / tutorials on open-source LLM finetuning
    1 project | /r/LLMDevs | 10 Jul 2023
    I haven't run this yet, but I'm aware of EleutherAI's evaluation harness, EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of autoregressive language models (github.com), and GPT-4-based evaluations like lm-sys/FastChat: An open platform for training, serving, and evaluating large language models; release repo for Vicuna and FastChat-T5 (github.com).
  • Orca-Mini-V2-13b
    1 project | /r/LocalLLaMA | 9 Jul 2023
    Updates: Just finished the final evaluation (additional metrics) on https://github.com/EleutherAI/lm-evaluation-harness and have averaged the results for orca-mini-v2-13b. The average results for the Open LLM Leaderboard are not that great compared to the initial metrics. The average is now 0.54675, which puts this model below many other 13B models out there. (A worked example of how this average is formed appears after this list.)
  • My largest ever quants, GPT 3 sized! BLOOMZ 176B and BLOOMChat 1.0 176B
    6 projects | /r/LocalLLaMA | 6 Jul 2023
    Hey u/The-Bloke, appreciate the quants! What is the degradation on some benchmarks? Have you seen https://github.com/EleutherAI/lm-evaluation-harness? 3-bit and 2-bit quants will really be pushing it. I don't see a ton of evaluation results on the quants, and it would be nice to see a before and after. (A sketch of such a before/after comparison appears after this list.)
  • Dataset of MMLU results broken down by task
    2 projects | /r/datasets | 6 Jul 2023
    I am primarily looking for results of running the MMLU evaluation on modern large language models. I have been able to find some data here: https://github.com/EleutherAI/lm-evaluation-harness/tree/master/results and will be asking them if/when they can provide any additional data.
  • Orca-Mini-V2-7b
    1 project | /r/LocalLLaMA | 3 Jul 2023
    I evaluated orca_mini_v2_7b on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI.
  • Why Falcon 40B managed to beat LLaMA 65B?
    1 project | /r/datascience | 19 Jun 2023
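
Regarding the vLLM suggestion in the Mixtral post above: a minimal sketch of such a benchmark run, assuming a recent harness release with the vllm backend installed; the model name and tensor_parallel_size are placeholders for whatever fits on your GPUs.

    import lm_eval

    # Benchmark a model served through the vLLM backend; tensor_parallel_size
    # shards the weights across that many GPUs (e.g. a couple of A100s).
    results = lm_eval.simple_evaluate(
        model="vllm",
        model_args="pretrained=mistralai/Mixtral-8x22B-v0.1,tensor_parallel_size=2",
        tasks=["arc_challenge", "hellaswag"],
        num_fewshot=5,
    )
    print(results["results"])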
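
Regarding the evaluation-speed comment above: a rough, hypothetical sketch of the subsampling idea; accuracy is estimated from a random subset of the evaluation set instead of the whole thing, and evaluate_one is a stand-in for a real (slow) model call.

    import math
    import random

    def estimate_accuracy(examples, evaluate_one, sample_size=200, seed=0):
        # Score a random subset instead of the full evaluation set.
        random.seed(seed)
        sample = random.sample(examples, min(sample_size, len(examples)))
        correct = sum(evaluate_one(example) for example in sample)
        p = correct / len(sample)
        # 95% normal-approximation confidence interval for the estimate.
        half_width = 1.96 * math.sqrt(p * (1 - p) / len(sample))
        return p, (p - half_width, p + half_width)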
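
Regarding the orca-mini-v2-13b average above: at the time, the Open LLM Leaderboard score was the plain mean of four benchmark scores (ARC, HellaSwag, MMLU, TruthfulQA). The per-task numbers below are invented; only the mechanism matches.

    # Invented per-benchmark scores; the leaderboard average is their mean.
    scores = {
        "arc_challenge": 0.52,
        "hellaswag": 0.76,
        "mmlu": 0.39,
        "truthfulqa_mc": 0.52,
    }
    average = sum(scores.values()) / len(scores)
    print(f"{average:.5f}")  # 0.54750 here; 0.54675 was the figure reported above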
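
Regarding the quantization question above: the before/after comparison can be scripted from two harness runs. A minimal sketch, assuming the harness's JSON output layout with a top-level "results" mapping; the file names are hypothetical.

    import json

    # Hypothetical outputs from two harness runs over the same tasks.
    with open("fp16_results.json") as f:
        fp16 = json.load(f)["results"]
    with open("q2_results.json") as f:
        q2 = json.load(f)["results"]

    # Report the per-task accuracy change introduced by quantization.
    for task in sorted(fp16.keys() & q2.keys()):
        before, after = fp16[task].get("acc"), q2[task].get("acc")
        if before is not None and after is not None:
            print(f"{task}: {before:.4f} -> {after:.4f} ({after - before:+.4f})")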

Stats

Basic lm-evaluation-harness repo stats
  Mentions: 35
  Stars: 7,443
  Activity: 9.6
  Last commit: 10 days ago

