| | llm-humaneval-benchmarks | llm-leaderboard |
|---|---|---|
| Mentions | 10 | 6 |
| Stars | 83 | 270 |
| Growth | - | - |
| Activity | 4.9 | 7.8 |
| Last commit | 11 months ago | 9 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-humaneval-benchmarks
-
LLaMA2 Chat 70B outperformed ChatGPT
You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-benchmarks) results for coding.
While Llama2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, sans test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition atm though; GPT-4 wipes the floor for coding.
-
Claude 2
Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2%, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10 (see the pass@k sketch below).
For comparison:
* GPT-4 claims 85.4% on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1% pass@1, and at 91% pass@1 using their Reflexion technique. They also include MBPP and Leetcode Hard benchmark comparisons.
* WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3% pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
* The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. An independent developer, abacaj, has reproduced that result as part of code-eval, a repo for getting HumanEval results: https://github.com/abacaj/code-eval
Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-benchmarks that also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-results and airate https://github.com/catid/supercharger/tree/main/airate
Also, as with all LLM evals, these numbers are to be taken with a grain of salt...
Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
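Since the thread mixes pass@1 and pass@10 numbers, here is a minimal sketch of the unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021); the function name and the toy numbers are illustrative, not taken from any of the leaderboards above.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    computed stably as a running product.

    n: completions sampled per problem
    c: completions that passed the tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Toy example: 200 samples per problem, 60 of which pass.
print(pass_at_k(200, 60, 1))   # 0.30 -> pass@1 equals c/n
print(pass_at_k(200, 60, 10))  # much higher -> pass@10
```

The gap between the two printed values is why a pass@10 score is not comparable to a pass@1 score from another report.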
-
Which LLM works for taboo questions or programming like web scraping?
To get an idea of programming performance, my can-ai-code Leaderboard is freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
-
Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem in a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers with greedy decoding and is tested with the same code.
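For readers who want to reproduce this kind of number, a rough sketch of a single-attempt, greedy-decoding HumanEval run using OpenAI's human-eval harness and Hugging Face transformers follows; the model name is a placeholder and the generation settings are assumptions, not WizardCoder's actual evaluation script.

```python
# Sketch: one greedy completion per HumanEval problem, scored with
# OpenAI's human-eval harness (pip install human-eval transformers).
from human_eval.data import read_problems, write_jsonl
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "WizardLM/WizardCoder-15B-V1.0"  # placeholder; any causal LM works

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

samples = []
for task_id, problem in read_problems().items():
    inputs = tok(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=384, do_sample=False)  # greedy
    completion = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
```

Scoring samples.jsonl with one sample per problem is what yields the single-attempt pass@1 percentages quoted above.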
-
Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day, incorporating your feedback - and some closed models for comparison! It now has improved generation params and new models: Falcon, StarCoder, CodeGen, Claude+, Bard, OpenAssistant and more.
I switched from SageMaker to RunPod in the middle of this process, and boy am I happy I did. It is way cheaper and easier to scale for a project like this, and I highly recommend it. I now have a set of tooling I'm happy with for running tests en masse, and I will try to get my work up on GitHub soon: https://github.com/my-other-github-account/llm-humaneval-benchmarks
-
All Model Leaderboards (that I know)
-
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
Also, the code I used for this eval is up at https://github.com/my-other-github-account/llm-humaneval-benchmarks/tree/8f3a77eb3508f33a88699aac1c4b10d5e3dc7de1
llm-leaderboard
-
Email Obfuscation Rendered Almost Ineffective Against ChatGPT
This is assuming you're using a really big LLM behind a paid service. There are plenty of smaller open-source models. Not sure at what point it stops being "large", but when fine-tuned they are capable of matching the largest LLMs in performance on narrow tasks.
Some of these open source models can even be run on your local machine. It’d be very inexpensive to run thousands of pages through it.
https://llm-leaderboard.streamlit.app/
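As a rough illustration of the "run thousands of pages through a local model" point, here is a minimal sketch using a small local model through the transformers pipeline; the model name and the prompt are placeholder assumptions, and a model fine-tuned for this narrow task would presumably do better.

```python
# Sketch: batch a narrow extraction task through a small local model.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder
                     device_map="auto")

PROMPT = ("Extract any email address from the text below, "
          "undoing obfuscations like 'bob [at] example [dot] com'.\n\n"
          "Text: {page}\n\nEmail:")

pages = ["Contact: alice (at) example (dot) org"]  # extend to thousands of pages

for page in pages:
    out = generator(PROMPT.format(page=page),
                    max_new_tokens=32, do_sample=False)
    # The pipeline returns prompt + completion; keep only the completion.
    print(out[0]["generated_text"].rsplit("Email:", 1)[-1].strip())
```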
-
Is the ChatGPT and Bing AI boom already over?
palm-2-l-instruct scores 0.909 on Winogrande few-shot.
https://github.com/LudwigStumpp/llm-leaderboard/blob/main/README.md
-
Meta is preparing to launch a new open source coding model, dubbed Code Llama, that may release as soon as next week
They said it "rivals OpenAI’s Codex model", which performs worse than starcoder-16b on HumanEval-Python (pass@1) according to https://github.com/LudwigStumpp/llm-leaderboard
-
All Model Leaderboards (that I know)
-
GPT-3.5 and GPT-4 performance in Open LLM Leaderboard tests?
Yes, see this leaderboard for a comparison: https://llm-leaderboard.streamlit.app/
-
Sharing my comparison methodology for LLM models
So I've tried to use a basic matrix factorization method to estimate unknown benchmark scores for models based on the known benchmark scores. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is assumed to be a linear function of that quality score. This is similar to matrix factorization with only one latent factor (though the bias values have to be handled differently). Then I fit the known benchmark scores from https://github.com/LudwigStumpp/llm-leaderboard to my parameters and estimate the remaining benchmark scores; a sketch of the idea is below.
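A minimal sketch of the approach described above, assuming a per-benchmark slope and bias on top of a single latent quality score per model, fit by gradient descent on the known entries only; the variable names and toy data are illustrative, not the commenter's actual code.

```python
# Sketch: rank-1 "quality" model for filling in missing benchmark scores.
# score[m, b] ~= slope[b] * quality[m] + bias[b]; fit only on known entries.
import numpy as np

rng = np.random.default_rng(0)
# Toy score matrix (models x benchmarks), NaN = unknown.
scores = np.array([
    [0.85, 0.70, np.nan],
    [0.78, np.nan, 0.55],
    [np.nan, 0.52, 0.40],
])
known = ~np.isnan(scores)

n_models, n_benchmarks = scores.shape
quality = rng.normal(size=n_models)  # latent quality per model
slope = np.ones(n_benchmarks)        # per-benchmark scale
bias = np.zeros(n_benchmarks)        # per-benchmark offset

lr = 0.05
for _ in range(5000):
    pred = np.outer(quality, slope) + bias
    err = np.where(known, pred - scores, 0.0)  # ignore unknown entries
    # Gradient steps on squared error over the known entries only.
    quality -= lr * (err @ slope)
    slope -= lr * (err.T @ quality)
    bias -= lr * err.sum(axis=0)

estimates = np.outer(quality, slope) + bias
print(np.where(known, scores, estimates))  # known scores kept, gaps filled
```

With one latent factor the fit reduces to placing every model on a single quality axis, which matches the commenter's assumption that each benchmark is a linear function of that axis.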
What are some alternatives?
can-ai-code - Self-evaluating interview for AI coders
chain-of-thought-hub - Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
code-eval - Run evaluation on LLMs using human-eval benchmark
EvalAI - Evaluating state of the art in AI
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
searchGPT - Grounded search engine (i.e. with source reference) based on LLM / ChatGPT / OpenAI API. It supports web search, file content search etc.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
ggml - Tensor library for machine learning
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
poe-api - [UNMAINTAINED] A reverse engineered Python API wrapper for Quora's Poe, which provides free access to ChatGPT, GPT-4, and Claude.
alpa - Training and serving large-scale neural networks with auto parallelization.