| | supercharger | llm-humaneval-benchmarks |
|---|---|---|
| Mentions | 13 | 10 |
| Stars | 346 | 83 |
| Growth | - | - |
| Activity | 6.6 | 4.9 |
| Latest commit | about 1 year ago | 11 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
supercharger
-
Claude 2
Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2% on HumanEval, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10.
For comparison:
* GPT-4 claims 85.4 on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1 pass@1, and 91 pass@1 using their Reflexion technique. They also include MBPP and Leetcode Hard benchmark comparisons.
* WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3 pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
* The best open model I know of at the moment is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. An independent developer, abacaj, has reproduced that result as part of code-eval, a repo for getting HumanEval results: https://github.com/abacaj/code-eval
Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... which also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate
Also, as with all LLM evals, to be taken with a grain of salt...
Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
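For readers unfamiliar with the pass@1 / pass@10 notation above: the standard HumanEval methodology generates n samples per problem, counts how many pass the unit tests, and applies an unbiased estimator for pass@k. A minimal sketch of that estimator, with made-up numbers purely for illustration:
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper:
    n = samples generated per problem, c = samples that pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Made-up example: 200 samples for one problem, 52 of them pass the tests.
print(pass_at_k(200, 52, 1))   # pass@1  = 0.26 (equals c/n when k=1)
print(pass_at_k(200, 52, 10))  # pass@10 ≈ 0.95
```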
- Let's be honest: none of the models can code well
-
April 2023
Leverage locally-hosted Large Language Models to write software + unit tests (https://github.com/catid/supercharger)
- What coding llm is the best?
-
Is there such a thing as local Llamas integrated into VSCode?
supercharger: Write software + unit tests for you, based on Baize-30B 8-bit, using model parallelism
- I have a project in my own programming language, abusing both lexical and syntactic macros. I want to do refactoring tasks on it. I don't have a GPU, but I do have a 14-core CPU. Should I pay for cloud compute, or are there local ways to do such a task on my laptop? Which model is better for programming?
- What is the best open source model/program to help index and debug code?
- Leverage locally-hosted Large Language Models to write software and unit tests
-
Can LLMs do static code analysis?
Added support for the 65B LLaMA model to https://github.com/catid/supercharger tonight. It runs faster than Baize 30B (maybe due to the lack of an adapter) and only slightly slower than Galpaca 30B. Benchmarks here: https://docs.google.com/spreadsheets/d/1TYBNr_UPJ7wCzJThuk5ysje7K1x-_62JhBeXDbmrjA8/edit?usp=sharing
-
Benchmarks for LLMs on Consumer Hardware
Here's the code that loads it: https://github.com/catid/supercharger/blob/main/server/model_koala.py
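As context for the kind of setup involved, here is a minimal sketch of loading a large causal LM in 8-bit with model parallelism via Hugging Face transformers (requires accelerate and bitsandbytes). This is an illustrative approximation, not the actual contents of model_koala.py, and the model name is just a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-65b"  # placeholder; supercharger's loaders target other checkpoints

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # bitsandbytes 8-bit quantization
    device_map="auto",   # shard layers across all available GPUs (model parallelism)
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```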
llm-humaneval-benchmarks
-
LLaMA2 Chat 70B outperformed ChatGPT
You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.
While Llama 2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, barring test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition at the moment, though; GPT-4 wipes the floor with everything else for coding.
-
Which LLM works for taboo questions or programming like webscraping?
To get an idea of programming performance, see my can-ai-code Leaderboard, which was freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
-
Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem using a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same code.
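As a rough illustration of what single-attempt, greedy-decoding generation looks like in practice, here is a sketch of a HumanEval generation loop built on OpenAI's human-eval package. This is not the WizardCoder team's actual evaluation script, and it omits the model's instruction-prompt template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_name = "WizardLM/WizardCoder-15B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

samples = []
for task_id, problem in read_problems().items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=384, do_sample=False)  # greedy, single attempt
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
# With one completion per problem, the reported pass rate is pass@1.
```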
-
Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day, incorporating your feedback - and some closed models for comparison! It now has improved generation params and new models: Falcon, Starcoder, Codegen, Claude+, Bard, OpenAssistant and more.
I switched to RunPod from SageMaker in the middle of this process, and boy am I happy I did. It is way cheaper and easier to scale for a project like this, and I highly recommend it. I now have a set of tooling I'm happy with for running these tests en masse - I will try to get my work up on GitHub soon!: https://github.com/my-other-github-account/llm-humaneval-benchmarks
- All Model Leaderboards (that I know)
-
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
Also, my code I used for this eval is up at https://github.com/my-other-github-account/llm-humaneval-benchmarks/tree/8f3a77eb3508f33a88699aac1c4b10d5e3dc7de1
What are some alternatives?
developer - the first library to let you embed a developer agent in your own app!
can-ai-code - Self-evaluating interview for AI coders
gptest - GPTest VS Code Extension
code-eval - Run evaluation on LLMs using human-eval benchmark
walter - AI-powered software development assistant built right into GitHub so it can act as your junior developer.
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
evaporate - This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
locai - Connect to Kobold API through VS Code
llm-leaderboard - A joint community effort to create one central leaderboard for LLMs.
Flowise - Drag & drop UI to build your customized LLM flow
ggml - Tensor library for machine learning