code-eval vs code-interpreter-packages

| | code-eval | code-interpreter-packages |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 356 | 31 |
| Growth | - | - |
| Activity | 8.0 | 6.2 |
| Last commit | 9 months ago | 9 months ago |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
code-eval
-
Ask HN: LLM Leaderboard for Code Generation?
You're looking for "HumanEval" tests. Not saying this is the best way to test it, but it's the only standard test I know of that code models are compared with and commonly benchmarked on.
The current best models you'd want to try, that I'm aware of, are WizardCoder (15B), StarCoder (15B), and Replit's code model (3B). Replit's instruct model is interesting because of its competitive performance while only being a 3B model, so it's the easiest/fastest to use.
https://github.com/abacaj/code-eval - This is a large, mostly up-to-date list of benchmark results
https://huggingface.co/WizardLM/WizardCoder-15B-V1.0 - has a chart with a mostly up-to-date comparison
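To make the setup concrete, here is a toy, self-contained illustration of the HumanEval format (this is not an actual dataset entry): the model sees a function signature plus docstring and must produce the body, which is then executed against held-out unit tests. The real harness sandboxes this execution; this sketch just shows the mechanics.

```python
# Toy HumanEval-style task: the model is shown `prompt` and must complete
# the function body; the completion is exec'd together with unit tests.
prompt = '''def add(a, b):
    """Return the sum of a and b."""
'''

completion = "    return a + b\n"  # what the model would generate

test = '''
def check(candidate):
    assert candidate(2, 3) == 5
    assert candidate(-1, 1) == 0
'''

ns = {}
exec(prompt + completion + test, ns)  # define the candidate and its tests
ns["check"](ns["add"])                # raises AssertionError on failure
print("pass")
```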
-
LLaMA2 Chat 70B outperformed ChatGPT
You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.
While Llama 2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, sans test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition atm though; ChatGPT-4 wipes the floor for coding.
-
Claude 2
Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2% on HumanEval, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10.
For comparison:
* GPT-4 claims 85.4% on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1% pass@1, and 91% pass@1 using their Reflexion technique. They also include MBPP and LeetCode Hard benchmark comparisons.
* WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3% pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
* The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. The independent developer abacaj has reproduced that result as part of code-eval, a repo for getting HumanEval results: https://github.com/abacaj/code-eval
Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... that also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate
Also, as with all LLM evals, to be taken with a grain of salt...
Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
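For reference, the pass@1 / pass@10 numbers quoted above are conventionally computed with the unbiased estimator introduced alongside HumanEval in the Codex paper: generate n samples per task, count the c that pass, and estimate the probability that at least one of k randomly chosen samples passes. A minimal version:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k from the Codex paper: n samples drawn,
    c of them passed the tests, k is the selection budget."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 10 samples, 6 passing: pass@1 = 0.6, pass@2 ~= 0.867
print(pass_at_k(10, 6, 1), pass_at_k(10, 6, 2))
```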
-
Which LLM works for taboo questions or programming like webscraping?
To get an idea of programming performance, my can-ai-code Leaderboard was freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
-
GPT-4 API general availability
In terms of speed, we're talking about 140t/s for 7B models, and 40t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 word) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart) and you can get decent performance on 13B models on M1/M2 Macs.
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the HumanEval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it, since the replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying Reflexion or some of the other techniques that have been tried to further increase coding abilities (InterCode in particular has caught my eye: https://intercode-benchmark.github.io/); see the sketch after the footnotes below.
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
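On the Reflexion point: the core loop is just generate, execute the tests, and feed the failure back in as context for a retry. A minimal, hypothetical sketch follows; `generate` is a placeholder for whatever model call you use (it is not a real API), and the actual paper adds a separate self-reflection step and an episodic memory on top of this.

```python
# Hedged sketch of a Reflexion-style repair loop. Assumptions:
# - `generate(prompt)` stands in for any LLM call (OpenAI, llama.cpp, ...)
# - `tests` is a string of top-level assert statements.
import traceback

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; wire up your own model here."""
    raise NotImplementedError

def run_tests(code: str, tests: str) -> str | None:
    """Exec the candidate plus its tests; return the error text on
    failure, or None on success."""
    try:
        exec(code + "\n" + tests, {})
        return None
    except Exception:
        return traceback.format_exc(limit=1)

def solve(task_prompt: str, tests: str, max_rounds: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        code = generate(task_prompt + feedback)
        error = run_tests(code, tests)
        if error is None:
            return code  # all tests passed
        # The Reflexion idea: show the model its own failure and retry.
        feedback = f"\n# The previous attempt failed with:\n# {error}"
    return None
```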
code-interpreter-packages
-
Claude 2
ChatGPT isn't exactly aware of what packages it has available. If it says it can't do something, you can just ask it nicely to try. Here's a list of what it currently has installed: https://github.com/petergpt/code-interpreter-packages/blob/m...
Note that you can also upload statically compiled libs/binaries, even tarballs, into its execution environment. I'm not sure how sound that is from a security perspective, but people have been doing it lately (along with a lot of poking around).
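Such a list is presumably assembled by enumerating the sandbox's installed distributions from inside a session; something like the snippet below, run inside the interpreter, would produce it (this is an assumption about the method, not the repo's actual script):

```python
# Run inside the Code Interpreter sandbox to enumerate installed packages
# (a sketch of how such a list could be gathered, not the repo's script).
from importlib.metadata import distributions

pkgs = sorted({dist.metadata["Name"] for dist in distributions()})
print(f"{len(pkgs)} packages installed")
print("\n".join(pkgs[:20]))  # sample of the first 20
```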
What are some alternatives?
llm-humaneval-benchmarks
eval
llama.cpp - LLM inference in C/C++
azure-search-openai-demo - A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
visqol - Perceptual Quality Estimator for speech and audio
gpt4all - gpt4all: run open-source LLMs anywhere
llm-jeopardy - Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts
BIG-Bench-Hard - Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
openai-cookbook - Examples and guides for using the OpenAI API
open-llms - 📋 A list of open LLMs available for commercial use.