
Measuring Massive Multitask Language Understanding | ICLR 2021 (by hendrycks)

Test Alternatives

Similar projects and alternatives to test

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives; a higher count therefore suggests a more popular or more similar test alternative.

test reviews and mentions

Posts with mentions or reviews of test. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • Mixtral 7B MoE beats LLaMA2 70B in MMLU
    2 projects | /r/LocalLLaMA | 10 Dec 2023
    Sources:
    [1] MMLU Benchmark (Multi-task Language Understanding) | Papers With Code — https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
    [2] MMLU Dataset | Papers With Code — https://paperswithcode.com/dataset/mmlu
    [3] hendrycks/test: Measuring Massive Multitask Language Understanding | ICLR 2021 — GitHub — https://github.com/hendrycks/test
    [4] lukaemon/mmlu · Datasets at Hugging Face — https://huggingface.co/datasets/lukaemon/mmlu
    [5] [2009.03300] Measuring Massive Multitask Language Understanding — arXiv — https://arxiv.org/abs/2009.03300
  • Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
    7 projects | news.ycombinator.com | 4 Mar 2023
    Because there are many benchmarks that measure different things.

    You need to look at the benchmark that reflects your specific interest.

    So in this case ("I wasn't impressed that 30B didn't seem to know who Captain Picard was") the closest relevant benchmark they report is MMLU (Massive Multitask Language Understanding) [1].

    In the LLaMA paper they report a figure of 63.4% for the 5-shot average setting without fine-tuning on the 65B model, and 68.9% after fine-tuning. This is significantly better than the original GPT-3 (43.9% under the same conditions), but as they note:

    > "[it is] still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022))"

    InstructGPT [2] (which OpenAI points to as the most relevant ChatGPT publication) doesn't report MMLU performance.

    [1] https://github.com/hendrycks/test

    [2] https://arxiv.org/abs/2203.02155
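The "5-shot average" figures quoted above refer to MMLU's headline metric: each question is posed as a multiple-choice prompt preceded by five worked exemplars, and the final score is an unweighted (macro) average of per-subject accuracy rather than a per-question average. A minimal sketch of both pieces, assuming an approximation of the prompt template used in hendrycks/test and using made-up subject names and accuracies for illustration:

```python
# Sketch of MMLU-style k-shot prompting and macro-average scoring.
# The prompt template approximates the one in hendrycks/test; the
# subjects and accuracy values below are illustrative, not real results.

def format_prompt(subject, shots, question, choices):
    """Build a k-shot multiple-choice prompt (choices A-D), ending in 'Answer:'."""
    letters = "ABCD"
    lines = [f"The following are multiple choice questions (with answers) about {subject}."]
    for q, opts, gold in shots:  # few-shot exemplars with their gold answers
        lines.append(q)
        lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(opts)]
        lines.append(f"Answer: {gold}")
    lines.append(question)
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(choices)]
    lines.append("Answer:")  # the model is scored on the letter it emits here
    return "\n".join(lines)

def mmlu_average(per_subject_accuracy):
    """MMLU reports the unweighted mean over subjects, not over all questions."""
    return sum(per_subject_accuracy.values()) / len(per_subject_accuracy)

# Illustrative per-subject accuracies (hypothetical values):
accs = {"abstract_algebra": 0.30, "us_history": 0.80, "virology": 0.55}
print(round(mmlu_average(accs), 4))  # 0.55
```

Because the average is macro over subjects, a model's headline MMLU score can shift noticeably with gains on small subjects, which is one reason single-number comparisons between papers need care.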

Stats

Basic test repo stats
  • Mentions: 8
  • Stars: 899
  • Activity: 2.5
  • Last commit: 11 months ago

hendrycks/test is an open source project licensed under the MIT License, an OSI-approved license.

The primary programming language of test is Python.
