llm-humaneval-ben VS code-interpreter-packages

Compare llm-humaneval-ben vs code-interpreter-packages and see what their differences are.

llm-humaneval-ben

By my-other-github-account

code-interpreter-packages

A list of all packages and their descriptions in code interpreter as of 12 July 2023 (by petergpt)
                llm-humaneval-ben    code-interpreter-packages
Mentions        2                    1
Stars           -                    31
Growth          -                    -
Activity        -                    6.2
Last Commit     -                    9 months ago
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

llm-humaneval-ben

Posts with mentions or reviews of llm-humaneval-ben. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-27.
  • LLaMA2 Chat 70B outperformed ChatGPT
    5 projects | news.ycombinator.com | 27 Jul 2023
    You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.

    While Llama 2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, test contamination aside, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition at the moment though; GPT-4 wipes the floor with everything else for coding.

  • Claude 2
    6 projects | news.ycombinator.com | 11 Jul 2023
    I've been on an AI code-helper kick recently, so this caught my eye. According to the post, Claude 2 now scores 71.2%, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10.
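    For reference, pass@k is normally reported with the unbiased estimator from the original HumanEval paper (Chen et al., 2021): generate n samples per problem, count the c that pass the unit tests, and estimate the probability that at least one of k drawn samples passes. A minimal Python sketch (function name and the example numbers are illustrative, not from the post):

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

        n: total samples generated per problem
        c: number of those samples that pass the unit tests
        k: sampling budget being evaluated
        """
        if n - c < k:
            # Fewer than k failing samples exist, so every draw of k contains a pass.
            return 1.0
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # Example: 200 samples per problem, 40 of them correct.
    print(pass_at_k(200, 40, 1))   # 0.2 -- pass@1 equals the raw pass rate
    print(pass_at_k(200, 40, 10))  # noticeably higher, since any 1 of 10 draws passing counts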

    For comparison:

    * GPT-4 claims 85.4 on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1 pass@1, and at 91 pass@1 using their Reflexion technique. That paper also includes MBPP and LeetCode Hard benchmark comparisons

    * WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3 pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0

    * The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores a 63.5% pass@1. An independent developer, abacaj, has reproduced that result as part of code-eval, a repo for getting human-eval results (a minimal sketch of that workflow is at the end of this comment): https://github.com/abacaj/code-eval

    Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... which also ranks models with Eval+, as well as the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate

    Also, as with all LLM evals, to be taken with a grain of salt...

    Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
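
    For context on how these pass@k numbers get produced: code-eval and the Eval+ repos both build on OpenAI's human-eval harness (or a variant of it). A minimal sketch of that workflow, assuming the human-eval package is installed; generate_one_completion is a hypothetical placeholder for however you call your model:

    from human_eval.data import read_problems, write_jsonl

    # Placeholder: call the OpenAI API, a local Llama 2 / WizardCoder checkpoint, etc.
    def generate_one_completion(prompt: str) -> str:
        raise NotImplementedError

    problems = read_problems()  # the 164 HumanEval problems, keyed by task_id

    samples = [
        dict(task_id=task_id,
             completion=generate_one_completion(problems[task_id]["prompt"]))
        for task_id in problems
    ]
    write_jsonl("samples.jsonl", samples)

    # Scoring runs the generated code against the unit tests (sandbox it!):
    #   evaluate_functional_correctness samples.jsonl
    # which prints pass@k for the requested k values.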

code-interpreter-packages

Posts with mentions or reviews of code-interpreter-packages. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-11.
  • Claude 2
    6 projects | news.ycombinator.com | 11 Jul 2023
    ChatGPT isn't exactly aware of what packages it has available. If it says it can't do something, you can just ask it nicely to try. Here's a list of what it has installed currently: https://github.com/petergpt/code-interpreter-packages/blob/m...

    Note, you can also upload statically compiled libs/binaries, even tarballs, into its execution environment. I'm not sure how sound that is from a security perspective, but people have been doing it lately (along with a lot of poking around).
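
    To make that concrete, here is a sketch of the kind of cell people ask Code Interpreter to run after uploading a wheel; the sandbox has no internet access, so pip is pointed at the uploaded file directly. The /mnt/data path and the package name are assumptions for illustration, not something from the linked list:

    import subprocess, sys

    # Uploaded files typically land in /mnt/data; "mypackage" is a hypothetical
    # pure-Python wheel uploaded through the chat UI.
    wheel = "/mnt/data/mypackage-1.0-py3-none-any.whl"

    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--no-index", wheel],
        check=True,
    )

    import mypackage  # importable in later cells of the same session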

What are some alternatives?

When comparing llm-humaneval-ben and code-interpreter-packages you can also consider the following projects:

visqol - Perceptual Quality Estimator for speech and audio

eval

code-eval - Run evaluation on LLMs using human-eval benchmark

BIG-Bench-Hard - Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

llm-humaneval-benchmarks