llm-humaneval-benchmarks

By my-other-github-account

llm-humaneval-benchmarks Alternatives

Similar projects and alternatives to llm-humaneval-benchmarks

NOTE: The mention count indicates how often a project appears in common posts, plus user-suggested alternatives. A higher count therefore suggests a more popular or more similar alternative to llm-humaneval-benchmarks.

llm-humaneval-benchmarks reviews and mentions

Posts with mentions or reviews of llm-humaneval-benchmarks. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-07-27.
  • LLaMA2 Chat 70B outperformed ChatGPT
    5 projects | news.ycombinator.com | 27 Jul 2023
    You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.

    While Llama2 is an improvement over LLaMA v1, it's still nowhere near the best open models (currently, barring test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition atm though: GPT-4 wipes the floor for coding.
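
    For readers who want to reproduce numbers like these, here is a minimal sketch of the standard HumanEval flow using OpenAI's human-eval harness (https://github.com/openai/human-eval), which the repos above build on. The complete() function is a hypothetical stand-in for your model call, and scoring executes untrusted generated code, so run it in isolation.

        # Minimal HumanEval scoring sketch (assumes `pip install human-eval`).
        from human_eval.data import read_problems, write_jsonl

        def complete(prompt: str) -> str:
            raise NotImplementedError  # hypothetical: call your model here

        problems = read_problems()  # the 164 HumanEval problems
        samples = [
            dict(task_id=tid, completion=complete(problems[tid]["prompt"]))
            for tid in problems
        ]
        write_jsonl("samples.jsonl", samples)
        # Then score from a shell (runs generated code; sandbox it):
        #   evaluate_functional_correctness samples.jsonl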

  • Claude 2
    6 projects | news.ycombinator.com | 11 Jul 2023
    Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2%, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10 (see the pass@k sketch after this comment).

    For comparison:

    * GPT-4 claims 85.4 on HumanEval. In a recent paper (https://arxiv.org/pdf/2303.11366.pdf), GPT-4 was tested at 80.1 pass@1, rising to 91 pass@1 with their Reflexion technique; the paper also includes MBPP and Leetcode Hard benchmark comparisons.

    * WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3 pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0

    * The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. An independent developer, abacaj, has reproduced that result as part of code-eval, a repo for getting HumanEval results: https://github.com/abacaj/code-eval

    Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... that also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate

    Also, as with all LLM evals, to be taken with a grain of salt...

    Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
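
    Since pass@1 versus pass@10 comes up repeatedly here, a quick sketch of the unbiased pass@k estimator from the Codex paper (Chen et al., 2021), which these leaderboards generally build on: n is the number of samples generated per problem and c the number that pass the tests.

        import numpy as np

        def pass_at_k(n: int, c: int, k: int) -> float:
            # Unbiased estimator: 1 - C(n - c, k) / C(n, k),
            # computed as a numerically stable running product.
            if n - c < k:
                return 1.0
            return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

        # e.g. 37 passing samples out of 200 generated for one problem:
        print(pass_at_k(n=200, c=37, k=1))   # 0.185
        print(pass_at_k(n=200, c=37, k=10))  # higher: any of 10 draws may pass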

  • Which LLM works for taboo questions or programming like webscraping?
    2 projects | /r/LocalLLaMA | 9 Jul 2023
    To get an idea of programming performance, my can-ai-code Leaderboard was freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
  • Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
    5 projects | /r/LocalLLaMA | 15 Jun 2023
    ❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem using a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same code.
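
    As a rough illustration of what "greedy decoding, single attempt" means in practice, here is a minimal sketch using Hugging Face transformers; the prompt format and stop-sequence handling in the actual eval repos differ, and the model name is simply the WizardCoder card cited above.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        MODEL = "WizardLM/WizardCoder-15B-V1.0"  # model card linked above

        tok = AutoTokenizer.from_pretrained(MODEL)
        model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

        prompt = 'def add(a, b):\n    """Return the sum of a and b."""\n'
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(
            **inputs,
            max_new_tokens=128,
            do_sample=False,  # greedy: deterministic, one attempt per problem
        )
        print(tok.decode(out[0], skip_special_tokens=True))
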
  • Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day incorporating your feedback - and some closed models for comparison! Now has improved generation params, new models: Falcon, Starcoder, Codegen, Claude+, Bard, OpenAssistant and more
    5 projects | /r/LocalLLaMA | 10 Jun 2023
    I switched to RunPod from SageMaker in the middle of this process, and boy am I happy I did: it is way cheaper and easier to scale for a project like this, and I highly recommend it. I now have a set of tooling I'm happy with for running tests en masse, and I will try to get my work up on GitHub soon: https://github.com/my-other-github-account/llm-humaneval-benchmarks
  • All Model Leaderboards (that I know)
    4 projects | /r/LocalLLaMA | 8 Jun 2023
  • Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
    6 projects | /r/LocalLLaMA | 5 Jun 2023
    Also, the code I used for this eval is up at https://github.com/my-other-github-account/llm-humaneval-benchmarks/tree/8f3a77eb3508f33a88699aac1c4b10d5e3dc7de1

Stats

Basic llm-humaneval-benchmarks repo stats
Mentions: 10
Stars: 83
Activity: 4.9
Last commit: 11 months ago
