code-eval VS exllama

Compare code-eval vs exllama and see what their differences are.

code-eval

Run evaluation on LLMs using human-eval benchmark (by abacaj)

exllama

A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (by turboderp)
                 code-eval       exllama
Mentions         5               64
Stars            349             2,615
Growth           -               -
Activity         8.0             9.0
Last commit      8 months ago    8 months ago
Language         Python          Python
License          MIT License     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

code-eval

Posts with mentions or reviews of code-eval. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-27.
  • Ask HN: LLM Leaderboard for Code Generation?
    1 project | news.ycombinator.com | 14 Aug 2023
    You're looking for "HumanEval" tests. Not saying this is the best way to test it, but it's the only standard test I know of that code models are commonly compared against and benchmarked on.

    The current best models you'd want to try that I'm aware of are WizardCoder (15B), StarCoder (15B), and Replit's code model (3B). Replit's instruct model is interesting because of its competitive performance while only being a 3B model, so it's the easiest/fastest to use.

    https://github.com/abacaj/code-eval - This is a large, mostly up-to-date list of benchmarks.

    https://huggingface.co/WizardLM/WizardCoder-15B-V1.0 - has a chart with a mostly up-to-date comparison.
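
For context on what running these HumanEval benchmarks involves: the scores come from generating one or more completions per problem and executing them against unit tests. A minimal sketch using OpenAI's reference human-eval package follows; the `generate` helper is a hypothetical stand-in for the model under test, and whether code-eval calls this package directly is an assumption (its own entry points may differ).

```python
# Sketch: score model completions against HumanEval with OpenAI's
# reference harness (pip install human-eval).
from human_eval.data import read_problems, write_jsonl

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the model being benchmarked."""
    raise NotImplementedError

problems = read_problems()  # 164 hand-written programming tasks
samples = [
    dict(task_id=task_id, completion=generate(problem["prompt"]))
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)
# The bundled checker then executes each completion in a sandbox
# and reports pass@k:
#   evaluate_functional_correctness samples.jsonl
```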

  • LLaMA2 Chat 70B outperformed ChatGPT
    5 projects | news.ycombinator.com | 27 Jul 2023
    You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.

    While Llama2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, sans test contamination, WizardCoder-15B, a StarCoder fine-tune, is at the top). It's really not a competition though; ChatGPT-4 wipes the floor for coding atm.

  • Claude 2
    6 projects | news.ycombinator.com | 11 Jul 2023
    Since I've been on an AI code-helper kick recently: according to the post, Claude 2 now scores 71.2%, a significant upgrade from Claude 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10.

    For comparison:

    * GPT-4 claims 85.4 on HumanEval; in a recent paper (https://arxiv.org/pdf/2303.11366.pdf) GPT-4 was tested at 80.1 pass@1, and 91 pass@1 using their Reflexion technique. They also include MBPP and LeetCode Hard benchmark comparisons.

    * WizardCoder, a StarCoder fine-tune, is one of the top open models, scoring a 57.3 pass@1; model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0

    * The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine-tune, which scores a 63.5% pass@1. An independent developer, abacaj, has reproduced that result as part of code-eval, a repo for getting human-eval results: https://github.com/abacaj/code-eval

    Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... that also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate

    Also, as with all LLM evals, to be taken with a grain of salt...

    Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.” arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
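
A note on the pass@1 / pass@10 metrics traded back and forth above: pass@k estimates the probability that at least one of k sampled completions per problem passes the unit tests. A minimal sketch of the unbiased estimator from the HumanEval paper (Chen et al., 2021), which harnesses like those above implement:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: generate n samples per problem, count the c
    that pass, and estimate P(at least one of k random picks passes)."""
    if n - c < k:
        return 1.0  # fewer failures than picks: some pick must pass
    # 1 - C(n-c, k) / C(n, k), computed as a stable running product
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g. 10 samples, 4 correct: pass@1 = 0.4, pass@10 = 1.0
print(pass_at_k(10, 4, 1), pass_at_k(10, 4, 10))
```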

  • Which LLM works for taboo questions or programming like webscraping?
    2 projects | /r/LocalLLaMA | 9 Jul 2023
    To get an idea of programming performance, my can-ai-code Leaderboard is freshly updated this morning, but also check out the excellent llm-eval and code-eval leaderboards.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 words.) It's quite zippy. llama.cpp performs comparably on Nvidia GPUs now (but they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.

    You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.

    That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]) - I've only just started playing around with it since replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).

    I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)

    [1] https://github.com/turboderp/exllama#results-so-far

    [2] https://github.com/aigoopy/llm-jeopardy

    [3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...

    [4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
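
A quick back-of-the-envelope on the speeds quoted in the post above, using the commenter's own ~0.75 words-per-token rule of thumb:

```python
WORDS_PER_TOKEN = 0.75  # the comment's rough ratio for English text

for model, tok_per_s in [("7B", 140), ("33B", 40)]:
    print(f"{model}: {tok_per_s} tok/s ~= {tok_per_s * WORDS_PER_TOKEN:.0f} words/s")
# 7B: ~105 words/s; 33B: ~30 words/s - both well above reading speed.
```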

exllama

Posts with mentions or reviews of exllama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-09.
  • Any way to optimally use GPU for faster llama calls?
    1 project | /r/LocalLLaMA | 27 Sep 2023
    not using exllama seems like a tremendous waste
  • ExLlama: Memory efficient way to run Llama
    1 project | news.ycombinator.com | 15 Aug 2023
  • Ask HN: Cheapest hardware to run Llama 2 70B
    5 projects | news.ycombinator.com | 9 Aug 2023
  • Llama Is Expensive
    1 project | news.ycombinator.com | 20 Jul 2023
    > We serve Llama on 2 80-GB A100 GPUs, as that is the minimum required to fit Llama in memory (with 16-bit precision)

    Well there is your problem.

    LLaMA quantized to 4 bits fits in 40GB. And it gets similar throughput split between dual consumer GPUs, which likely means better throughput on a single 40GB A100 (or a cheaper 48GB Pro GPU).

    https://github.com/turboderp/exllama#dual-gpu-results

    Also, I'm not sure which model was tested, but Llama 70B chat should have better performance than the base model if the prompting syntax is right. That was only reverse-engineered from the Meta demo implementation recently.
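
The sizing claims in this post check out with simple weight-only arithmetic. A rough sketch, deliberately ignoring KV cache, activations, and the per-GPU buffers mentioned elsewhere in this thread (all of which add real overhead):

```python
def approx_weight_gib(params_billions: float, bits: int) -> float:
    """Weight-only footprint: parameter count x bits per weight.
    Ignores KV cache, activations, and framework buffers."""
    return params_billions * 1e9 * bits / 8 / 2**30

# Llama 70B: ~130 GiB at 16-bit (hence two 80GB A100s),
# but ~33 GiB at 4-bit - inside a single 40GB card plus overhead.
print(f"70B fp16:  {approx_weight_gib(70, 16):.0f} GiB")
print(f"70B 4-bit: {approx_weight_gib(70, 4):.0f} GiB")
# Same math for the 16GB-GPU question further down this thread:
# a 13B model at 4-bit is only ~6 GiB of weights.
print(f"13B 4-bit: {approx_weight_gib(13, 4):.0f} GiB")
```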

  • Accessing Llama 2 from the command-line with the LLM-replicate plugin
    16 projects | news.ycombinator.com | 18 Jul 2023
    For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/

    This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend and supports GPU acceleration, and LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.

    I just saw a slick new tool, https://ollama.ai/, that will let you install and run llama2-7b with a single `ollama run llama2` command; it has a very simple one-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend - it seemed to be CPU-only on my MBA, but I didn't poke around too much and it's brand new, so we'll see.

    For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama

    For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/

    I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/

  • GPT-4 Details Leaked
    3 projects | news.ycombinator.com | 10 Jul 2023
    Deploying the 60B version is a challenge, though, and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .

    If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
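
As a concrete illustration of the quantize-then-serve flow described above, loading a ready-made 4-bit GPTQ checkpoint with AutoGPTQ might look like the following. The repo ID is a placeholder, this reflects AutoGPTQ's mid-2023 `from_quantized` API, and note that exllama uses its own loader rather than this one.

```python
# Sketch: run a pre-quantized GPTQ model with AutoGPTQ (mid-2023 API).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/some-model-GPTQ"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,  # assumes the checkpoint ships safetensors
)

inputs = tokenizer("def fib(n):", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```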

  • Multi-GPU questions
    1 project | /r/LocalLLaMA | 9 Jul 2023
    Exllama, for example, uses buffers on each card that reduce the amount of VRAM available for the model and context; see https://github.com/turboderp/exllama/issues/121
  • A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
    5 projects | /r/LocalLLaMA | 7 Jul 2023
    For the inference step, this repo can help you use ExLlama to run inference on an evaluation dataset with the best throughput.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    (This is the same comment quoted in the code-eval section above; see there for the full text and links.)

  • Local LLMs GPUs
    2 projects | /r/LocalLLaMA | 4 Jul 2023
    That's a 16GB GPU; you should be able to fit a 13B model at 4-bit: https://github.com/turboderp/exllama

What are some alternatives?

When comparing code-eval and exllama you can also consider the following projects:

llm-humaneval-benchmarks

llama.cpp - LLM inference in C/C++

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

azure-search-openai-demo - A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.

GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ

visqol - Perceptual Quality Estimator for speech and audio

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

KoboldAI

gpt4all - gpt4all: run open-source LLMs anywhere

text-generation-inference - Large Language Model Text Generation Inference