llm-humaneval-benchmarks vs can-ai-code

| | llm-humaneval-benchmarks | can-ai-code |
|---|---|---|
| Mentions | 10 | 30 |
| Stars | 83 | 451 |
| Growth | - | - |
| Activity | 4.9 | 9.5 |
| Last commit | 11 months ago | 5 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-humaneval-benchmarks
-
LLaMA2 Chat 70B outperformed ChatGPT
You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.
While Llama2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, sans test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition at the moment though; ChatGPT-4 wipes the floor with everything else for coding.
-
Claude 2
Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2%, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10 (see the pass@k sketch at the end of this comment).
For comparison:
* GPT-4 claims 85.4 on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1 pass@1, and at 91 pass@1 using their Reflexion technique. They also include MBPP and LeetCode Hard benchmark comparisons.
* WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3 pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
* The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. The independent developer abacaj has reproduced that result as part of code-eval, a repo for getting HumanEval results: https://github.com/abacaj/code-eval
Those interested in this area may also want to take a look at this repo, which also ranks with Eval+: https://github.com/my-other-github-account/llm-humaneval-ben... as well as the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate
Also, as with all LLM evals, to be taken with a grain of salt...
Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
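Since pass@1 vs pass@10 comes up above, here is a minimal sketch of the standard unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021); the sample counts in the usage example are made up purely for illustration:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled for one problem
    c: number of those completions that pass the unit tests
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers only: 200 samples on one problem, 130 of them pass
print(pass_at_k(200, 130, 1))   # pass@1  = 0.65
print(pass_at_k(200, 130, 10))  # pass@10 ~= 1.0
```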
-
Which LLM works for taboo questions or programming tasks like web scraping?
To get an idea of programming performance, my can-ai-code Leaderboard is freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
-
Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem using a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers with greedy decoding and is tested with the same code.
-
Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day, incorporating your feedback - and some closed models for comparison! It now has improved generation params and new models: Falcon, StarCoder, CodeGen, Claude+, Bard, OpenAssistant and more.
I switched from SageMaker to RunPod in the middle of this process and boy am I happy I did. It is way cheaper and easier to scale for a project like this, and I highly recommend it. I now have a set of tooling I'm happy with for running tests on it en masse - I will try to get my work up on GitHub soon: https://github.com/my-other-github-account/llm-humaneval-benchmarks
- All Model Leaderboards (that I know)
-
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
Also, my code I used for this eval is up at https://github.com/my-other-github-account/llm-humaneval-benchmarks/tree/8f3a77eb3508f33a88699aac1c4b10d5e3dc7de1
can-ai-code
-
Ask HN: Code Llama 70B on a dedicated server
You can run a Q4 quant of a 70B model in about 40GB of RAM (+ context). Your single-user (batch size 1, bs=1) inference speed will be basically memory-bottlenecked, so on a dual-channel dedicated box you'd expect somewhere around 1 token/s. That's decoding; prefill/prompt processing will take even longer (as your chat history grows) on CPU. So it falls into the realm of technically possible, but not for real-world use.
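A rough back-of-the-envelope for why bs=1 CPU decoding lands around 1 token/s here (the bandwidth figure is an assumed value for a typical dual-channel DDR4-3200 box, not something from the original comment):

```python
# bs=1 decoding is memory-bandwidth bound: every generated token streams
# all of the model weights through the CPU once.
model_bytes = 40e9      # ~40 GB for a Q4 quant of a 70B model (from the comment above)
bandwidth = 51.2e9      # assumption: dual-channel DDR4-3200 ~= 2 x 25.6 GB/s theoretical

print(f"ceiling: {bandwidth / model_bytes:.1f} tok/s")  # ~1.3 tok/s; real-world closer to 1
```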
If you're looking specifically for CodeLlama 70B, Artificial Analysis https://artificialanalysis.ai/models/codellama-instruct-70b/... lists Perplexity, Together.ai, Deep Infra, and Fireworks as potential hosts, with Together.ai and Deep Infra at about $0.90/1M tokens, roughly 30 tokens/s, and about 300ms latency (time to first token).
For those looking for local coding models specifically, I keep a list of LLM coding evals here: https://llm-tracker.info/evals/Code-Evaluation
On the EvalPlus Leaderboard, there are about 10 open models that rank higher than CodeLlama 70B, all of them smaller models: https://evalplus.github.io/leaderboard.html
A few other evals (worth cross-referencing to counter contamination and overfitting):
* CRUXEval Leaderboard https://crux-eval.github.io/leaderboard.html
* CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
* Big Code Models Leaderboard https://huggingface.co/spaces/bigcode/bigcode-models-leaderb...
From the various leaderboards, deepseek-ai/deepseek-coder-33b-instruct still looks like the best-performing open model (it has a very liberal ethical license), followed by ise-uiuc/Magicoder-S-DS-6.7B (a deepseek-coder-6.7b-base fine-tune). The former can be run as a Q4 quant on a single 24GB GPU (a used 3090 should run you about $700 atm), and the latter, if it works for you, will run about 4x faster and fit on even cheaper/weaker GPUs.
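For reference, a minimal sketch of running the 33B model as a 4-bit quant on a single 24GB GPU with transformers + bitsandbytes (the model ID is the one named above; the prompt and generation parameters are illustrative assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/deepseek-coder-33b-instruct"

# 4-bit (NF4) quantization so the 33B model fits in ~24GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```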
There are always recent developments, but two worth pointing out:
OpenCodeInterpreter - a new system fine-tuned from the DeepSeek code models that uses execution feedback and outperforms ChatGPT-4 Code Interpreter: https://opencodeinterpreter.github.io/ (a rough sketch of the execution-feedback idea is below)
StarCoder2-15B just dropped and also looks competitive. Announcement and relevant links: https://huggingface.co/blog/starcoder2
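To make the execution-feedback idea concrete, here is a minimal sketch of that style of loop; this is not OpenCodeInterpreter's actual implementation, and `generate_code` is a hypothetical stand-in for whatever model call you use:

```python
import subprocess
import sys
import tempfile
from typing import Optional

def generate_code(task: str, feedback: Optional[str] = None) -> str:
    """Hypothetical helper: call your code model, optionally passing the last error."""
    raise NotImplementedError

def solve_with_execution_feedback(task: str, max_rounds: int = 3) -> str:
    feedback = None
    code = ""
    for _ in range(max_rounds):
        code = generate_code(task, feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code              # ran cleanly: accept this attempt
        feedback = result.stderr     # feed the traceback into the next attempt
    return code                      # best effort after max_rounds
```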
-
Meta AI releases Code Llama 70B
This is a completely fair, but open, question. Not to be a typical HN user, but when you say SOTA local, the real question is which benchmarks you care about when evaluating: size, operability, complexity, explainability, etc.
Working out which copilot models perform best has been a deep exercise for me; it has really made me evaluate my own coding style, what I find important, and what I look out for when investigating models and evaluating interview candidates.
I think the three benchmarks & leaderboards most people go to are:
https://huggingface.co/spaces/bigcode/bigcode-models-leaderb... - which is the most understood, broad language-capability leaderboard, relying on well-understood evaluations and benchmarks.
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... - Also comprehensive, but primarily assesses Python and JavaScript.
https://evalplus.github.io/leaderboard.html - which I think is a better take on comparing models you intend to run locally as you can evaluate performance, operability and size in one visualisation.
Best of luck and I would love to know which models & benchmarks you choose and why.
-
Stable Code 3B: Coding on the Edge
Here is a leaderboard of some models:
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
Don't know how biased this leaderboard is, but I guess you could just give some of them a try and see for yourself.
-
Mistral has an even more powerful model in the prototype phase
- Can AI Code? - https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
-
Assessing LLMs for code generation.
Check out https://github.com/the-crypt-keeper/can-ai-code for some ideas. I'd love to see more shootouts like this. Especially if they were spread out among a few different languages.
-
Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
Very cool, this looks like a combination of chatbot-ui and llama-cpp-python? A similar project I've been using is https://github.com/serge-chat/serge. Nous-Hermes-Llama2-13b is my daily driver and scores high on coding evaluations (https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...).
-
How Is LLaMa.cpp Possible?
I have several sets of quant comparisons posted on my HF spaces, the caveat is my prompts are all "English to code": https://huggingface.co/spaces/mike-ravkine/can-ai-code-compa...
The dropdown at the top selects which comparison: Falcon compares GGML, Vicuna compares bitsandbytes. I have some more comparisons planned; feel free to open an issue if you'd like to see something specific: https://github.com/the-crypt-keeper/can-ai-code
-
Ask HN: Who is using small OS LLMs in production?
Yeah it seemed suspiciously high for HumanEval and it only ranks 14th for JS and 7th for Python on other benchmarks now: https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
WizardCoder is a bit of a problem since it's not Llama 1/2 based but is its own 15B model, and as such support for it in anything practical is near nonexistent. WizardLM v1.2 looks like it may be worth checking out.
-
Recent updates on the LLM Explorer (15,000+ LLMs listed)
There are at least 4 different types of quants floating around HF (bitsandbytes, GGML, GPTQ and AWQ), so I don't know if a "GGML" column makes sense vs a more abstract way of linking quants to their base models. I am doing this and it's fucking awful: https://github.com/the-crypt-keeper/can-ai-code/blob/main/models/models.yaml
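One way to read that "more abstract" idea is to key quants by their base model and treat the quant format as data; this is a hypothetical sketch only (field names are invented, not can-ai-code's actual models.yaml schema):

```python
# Hypothetical layout: each base model owns a list of its quants, so the quant
# format (bitsandbytes, GGML, GPTQ, AWQ) is a field rather than a fixed column.
models = {
    "WizardCoder-15B-V1.0": {
        "base": "bigcode/starcoder",
        "quants": [
            {"format": "GPTQ", "bits": 4, "repo": "..."},
            {"format": "GGML", "bits": 4, "repo": "..."},
        ],
    },
}
```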
-
Did anyone try to benchmark LLMs for coding against each other and against proprietary ones like Copilot X?
Ah I meant this one but I see now it's WIP.
What are some alternatives?
code-eval - Run evaluation on LLMs using human-eval benchmark
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
openchat - OpenChat: Advancing Open-source Language Models with Imperfect Data
llm-leaderboard - A joint community effort to create one central leaderboard for LLMs.
Local-LLM-Comparison-Colab-UI - Compare the performance of different LLM that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
ggml - Tensor library for machine learning
llm-mlc - LLM plugin for running models using MLC
poe-api - [UNMAINTAINED] A reverse engineered Python API wrapper for Quora's Poe, which provides free access to ChatGPT, GPT-4, and Claude.