poe-api VS llm-humaneval-benchmarks

Compare poe-api vs llm-humaneval-benchmarks and see what their differences are.

poe-api

[UNMAINTAINED] A reverse engineered Python API wrapper for Quora's Poe, which provides free access to ChatGPT, GPT-4, and Claude. (by ading2210)
                poe-api                               llm-humaneval-benchmarks
Mentions        3                                     10
Stars           2,501                                 83
Growth          -                                     -
Activity        8.4                                   4.9
Latest commit   8 months ago                          11 months ago
Language        Python                                Jupyter Notebook
License         GNU General Public License v3.0 only  MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
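
The exact formula behind the activity score is not published, but the description above (recent commits weighted more heavily than older ones) suggests a recency-weighted sum over the commit history. Below is a minimal, purely illustrative sketch; the exponential decay and the 30-day half-life are assumptions, not the site's actual method:

```python
import math
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Toy recency-weighted activity metric: each commit contributes
    a weight that halves every `half_life_days`, so recent commits
    dominate the score. The published number is then presumably
    normalized onto a relative 0-10 scale across tracked projects."""
    now = time.time()
    return sum(
        math.exp(-math.log(2) * (now - ts) / 86400.0 / half_life_days)
        for ts in commit_timestamps
    )
```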

poe-api

Posts with mentions or reviews of poe-api. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-05.
  • Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
    6 projects | /r/LocalLLaMA | 5 Jun 2023
  • Free Opensource access to popular models such as GPT-3.5, GPT-4, claude, claude+, claude-instant, claude-instant-100k
    1 project | /r/GPT4 | 28 May 2023
    Hi, I don't quite get why this "situation" is being called piracy; checking my GitHub page or the Discord server thoroughly before saying that would have been appreciated. I'll take the time to explain regardless. This program works by using a website called poe.com to gain access to the models through a tested GitHub project called 'poe-api' (already credited on my GitHub page; the program has been modified a bit to prevent some errors that my implementation caused). It then uses tokens created from a Discord server called FreeGPT-4 and integrates them into a Python project (I have permission from the owner, the person who is "paying" the bills; "paying" is in quotes since they are not paying for everyone to have access, as that is most likely not possible). Since the owner has explicitly asked not to disclose this information, I unfortunately cannot provide details on how they create these tokens in bulk without getting flagged, but since this project has been up for a while and has not faced legal action, it is safe to use. The final goal is for this project to become a library so that anyone can integrate it into their programs easily.
  • Has anyone tried the new 100k Token AI model?
    1 project | /r/AnthropicAi | 14 May 2023
    Poe.com does not have a public API at this time, although there are reverse-engineered wrappers like this one: https://github.com/ading2210/poe-api
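
For context, this is roughly what using that wrapper looked like per its README. The project is now unmaintained, so treat this as a historical sketch: the p-b cookie auth, the "capybara" bot codename, and the chunk field names are recalled from the old README and may no longer work.

```python
import poe  # pip install poe-api (archived, [UNMAINTAINED])

# Auth used the p-b cookie value from a logged-in poe.com session.
client = poe.Client("P_B_COOKIE_VALUE_HERE")

# Bots were addressed by internal codenames such as "capybara";
# client.bot_names mapped codenames to display names.
for chunk in client.send_message("capybara", "Hello, world!"):
    # Responses streamed in chunks; "text_new" held the newly
    # generated text since the previous chunk.
    print(chunk["text_new"], end="", flush=True)
```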

llm-humaneval-benchmarks

Posts with mentions or reviews of llm-humaneval-benchmarks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-27.
  • LLaMA2 Chat 70B outperformed ChatGPT
    5 projects | news.ycombinator.com | 27 Jul 2023
    You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.

    While Llama 2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, barring test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition at the moment, though; GPT-4 wipes the floor with everything else for coding.
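
The practical difference between HumanEval and Eval+ (EvalPlus) is that Eval+ adds many extra test inputs per problem, so solutions that merely satisfy the original handful of tests get caught. A hypothetical illustration of the failure mode it targets; the function and tests below are invented for this example:

```python
def median(xs):
    """A HumanEval-style task: return the median of a list.
    This solution is deliberately buggy for even-length inputs."""
    xs = sorted(xs)
    return xs[len(xs) // 2]

# A sparse base test suite might only probe odd-length inputs:
assert median([3, 1, 2]) == 2  # passes

# An Eval+-style augmented input exposes the bug:
print(median([1, 2, 3, 4]))  # prints 3, but the true median is 2.5
```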

  • Claude 2
    6 projects | news.ycombinator.com | 11 Jul 2023
    I've been on an AI code-helper kick recently, so this caught my eye. According to the post, Claude 2 now scores 71.2% on HumanEval, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10 (see the pass@k sketch after this post).

    For comparison:

    * GPT-4 claims 85.4% on HumanEval. In a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1% pass@1, and at 91% pass@1 using their Reflexion technique. They also include MBPP and LeetCode Hard benchmark comparisons.

    * WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring 57.3% pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0

    * The best open model I know of at the moment is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores 63.5% pass@1. The independent developer abacaj has reproduced that result as part of code-eval, a repo for running HumanEval evaluations: https://github.com/abacaj/code-eval

    Those interested in this area may also want to take a look at this repo, https://github.com/my-other-github-account/llm-humaneval-ben..., which also ranks models with Eval+; the CanAiCode Leaderboard, https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...; and airate, https://github.com/catid/supercharger/tree/main/airate

    Also, as with all LLM evals, to be taken with a grain of salt...

    Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
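
On the pass@1 vs. pass@10 question raised above: pass@k is the probability that at least one of k sampled completions for a problem passes the unit tests. The original HumanEval paper (Chen et al., 2021, "Evaluating Large Language Models Trained on Code") estimates it from n samples with c passing via a numerically stable unbiased estimator; this snippet follows that paper's reference implementation:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n total samples,
    c of which passed the tests (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    # 1 - P(all k drawn samples fail), computed as a stable product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=200, c=120, k=1))   # 0.6, i.e. c/n
print(pass_at_k(n=200, c=120, k=10))  # ~1.0
```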

  • Which LLM works for taboo questions or programming like webscraping?
    2 projects | /r/LocalLLaMA | 9 Jul 2023
    To get an idea of programming performance, my can-ai-code Leaderboard is freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
  • Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
    5 projects | /r/LocalLLaMA | 15 Jun 2023
    ❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem utilizing a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same code.
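
Greedy decoding, as the note above describes, means a single deterministic attempt per problem rather than sampling. A minimal sketch using Hugging Face transformers; the model ID is the WizardCoder card linked earlier, while the prompt and generation length are illustrative assumptions (the real model expects its own instruction template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardCoder-15B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# do_sample=False disables sampling, so generation is greedy:
# one deterministic completion per problem, matching the
# single-attempt pass@1 protocol described above.
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```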
  • Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day incorporating your feedback - and some closed models for comparison! Now has improved generation params, new models: Falcon, Starcoder, Codegen, Claude+, Bard, OpenAssistant and more
    5 projects | /r/LocalLLaMA | 10 Jun 2023
    I switched from SageMaker to RunPod in the middle of this process, and boy am I happy I did. It is way cheaper and easier to scale for a project like this, and I highly recommend it. I now have a set of tooling I'm happy with for running tests en masse, and I will try to get my work up on GitHub soon: https://github.com/my-other-github-account/llm-humaneval-benchmarks
  • All Model Leaderboards (that I know)
    4 projects | /r/LocalLLaMA | 8 Jun 2023
  • Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
    6 projects | /r/LocalLLaMA | 5 Jun 2023
    Also, the code I used for this eval is up at https://github.com/my-other-github-account/llm-humaneval-benchmarks/tree/8f3a77eb3508f33a88699aac1c4b10d5e3dc7de1

What are some alternatives?

When comparing poe-api and llm-humaneval-benchmarks you can also consider the following projects:

best-of-web-python - 🏆 A ranked list of awesome python libraries for web development. Updated weekly.

can-ai-code - Self-evaluating interview for AI coders

code-eval - Run evaluation on LLMs using human-eval benchmark

WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

llm-leaderboard - A joint community effort to create one central leaderboard for LLMs.

ggml - Tensor library for machine learning

codealpaca

visqol - Perceptual Quality Estimator for speech and audio

eval

BIG-Bench-Hard - Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

supercharger - Supercharge Open-Source AI Models