koboldcpp VS exllamav2

Compare koboldcpp vs exllamav2 and see what their differences are.

koboldcpp

A simple one-file way to run various GGML and GGUF models with KoboldAI's UI (by LostRuins)

exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs (by turboderp)
                koboldcpp                                  exllamav2
Mentions        180                                        17
Stars           3,817                                      2,935
Growth          -                                          -
Activity        10.0                                       9.8
Latest commit   2 days ago                                 1 day ago
Language        C++                                        Python
License         GNU Affero General Public License v3.0     MIT License
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

koboldcpp

Posts with mentions or reviews of koboldcpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-27.

exllamav2

Posts with mentions or reviews of exllamav2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Running Llama3 Locally
    1 project | news.ycombinator.com | 20 Apr 2024
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...
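
    The linked projects do this at the loader level, but the core trick is small enough to sketch. A toy illustration (not code from any of the repos above), assuming a PyTorch-style nn.ModuleList of decoder layers:

        import torch.nn as nn

        def slice_layers(layers: nn.ModuleList, pattern: list[range]) -> nn.ModuleList:
            # Build a new decoder stack from index ranges over the existing
            # layers, repeating or skipping ranges with no new weights.
            # e.g. pattern = [range(0, 24), range(16, 32)] repeats layers
            # 16-23, producing a "self-merged" model at runtime.
            return nn.ModuleList([layers[i] for r in pattern for i in r])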

  • What do you use to run your models?
    14 projects | /r/LocalLLaMA | 7 Dec 2023
    Sorry, I'm somewhat familiar with this term (I've seen it as a model loader in Oobabooga), but still not following the correlation here. Are you saying I should instead be using this project in lieu of llama.cpp? Or are you saying that there is, perhaps, an exllamav2 "extension" or similar within llama.cpp that I can use?
  • I just started having problems with the colab again. I get errors and it just stops. Help?
    1 project | /r/SillyTavernAI | 5 Dec 2023
    EDIT: I reported the bug on the exllamav2 GitHub. It's actually already fixed, just not in any currently released build.
  • Yi-34B-200K works on a single 3090 with 47K context/4bpw
    1 project | /r/LocalLLaMA | 8 Nov 2023
    Install exllamav2 from git with pip install git+https://github.com/turboderp/exllamav2.git, and make sure you have Flash Attention 2 installed as well.
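
    For reference, loading and generating with that era's exllamav2 API looked roughly like this (adapted from memory of the repo's example script; the model path is hypothetical and class names may have shifted in later releases):

        from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
        from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

        config = ExLlamaV2Config()
        config.model_dir = "/models/Yi-34B-200K-exl2-4bpw"  # hypothetical path
        config.prepare()

        model = ExLlamaV2(config)
        cache = ExLlamaV2Cache(model, lazy=True)  # sized from the config's max seq len
        model.load_autosplit(cache)               # auto-split weights across GPUs
        tokenizer = ExLlamaV2Tokenizer(config)

        generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
        settings = ExLlamaV2Sampler.Settings()
        settings.temperature = 0.8

        print(generator.generate_simple("Hello,", settings, num_tokens=64))
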
  • Tested: ExllamaV2's max context on 24gb with 70B low-bpw & speculative sampling performance
    2 projects | /r/LocalLLaMA | 2 Nov 2023
    Recent exllamav2 releases bring working FP8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
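
    In API terms the FP8 cache is a drop-in class swap (class name as of the late-2023 releases; everything else stays as in the loading sketch above):

        # 8-bit KV cache: roughly halves cache memory, so about twice the
        # context fits in the same VRAM.
        from exllamav2 import ExLlamaV2Cache_8bit

        cache = ExLlamaV2Cache_8bit(model, max_seq_len=32768, lazy=True)
        model.load_autosplit(cache)
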
  • Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
    9 projects | news.ycombinator.com | 31 Oct 2023
    Without batching, I was actually thinking that's kind of modest.

    ExllamaV2 will get 48 tokens/s on a 4090, which is much slower (but cheaper) than an H100:

    https://github.com/turboderp/exllamav2#performance

    I didn't test codellama, but the 3090 TI figures are in the ballpark of my generation speed on a 3090.

  • Guide for Llama2 70b model merging and exllama2 quantization
    2 projects | /r/LocalLLaMA | 24 Oct 2023
    First, you need the convert.py script from turboderp's Exllama2 repo. You can read all about the convert.py arguments here.
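
    From memory of the repo's documentation, a typical quantization run looks something like the following; the flags and paths are assumptions to verify against the docs, and the subprocess wrapper is only there to keep the sketch in Python:

        import subprocess

        subprocess.run([
            "python", "convert.py",
            "-i", "/models/llama2-70b-merged",   # input: fp16 HF model dir
            "-o", "/tmp/exl2-work",              # working/scratch directory
            "-cf", "/models/llama2-70b-exl2",    # output dir for the quantized model
            "-b", "4.0",                         # target average bits per weight
        ], check=True)
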
  • LLM Falcon 180B Needs 720GB RAM to Run
    1 project | news.ycombinator.com | 24 Sep 2023
    > brute aggressive quantization

    Cutting-edge quantization like ExLlama's EXL2 is far from brute force: https://github.com/turboderp/exllamav2#exl2-quantization

    > The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. Moreover, it's possible to apply multiple quantization levels to each linear layer, producing something akin to sparse quantization wherein more important weights (columns) are quantized with more bits. The same remapping trick that lets ExLlama work efficiently with act-order models allows this mixing of formats to happen with little to no impact on performance. Parameter selection is done automatically by quantizing each matrix multiple times, measuring the quantization error (with respect to the chosen calibration data) for each of a number of possible settings, per layer. Finally, a combination is chosen that minimizes the maximum quantization error over the entire model while meeting a target average bitrate.
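
    As a back-of-the-envelope sense of what those average bitrates buy, weight memory scales linearly with bits per weight (weights only; KV cache and activations come on top):

        def weight_gib(n_params: float, bits_per_weight: float) -> float:
            return n_params * bits_per_weight / 8 / 2**30

        for bpw in (16, 4.0, 2.5):
            print(f"70B @ {bpw} bpw: {weight_gib(70e9, bpw):.1f} GiB")
        # 70B @ 16 bpw: 130.4 GiB
        # 70B @ 4.0 bpw: 32.6 GiB
        # 70B @ 2.5 bpw: 20.4 GiB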

    Llama.cpp is also working on a feature (speculative decoding) that lets a small model "guess" the output of a big model, which the big model then "checks" for correctness. This is more of a performance feature, but you could also arrange it to accelerate a big model on a small GPU.
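
    The scheme is easy to sketch in toy form; a real implementation verifies all drafted tokens with one batched forward pass of the big model rather than one call per token, and `argmax_next` here is a hypothetical greedy-prediction interface:

        def speculative_step(target, draft, tokens: list[int], k: int = 4) -> list[int]:
            proposed = []
            for _ in range(k):                      # cheap model guesses k tokens
                proposed.append(draft.argmax_next(tokens + proposed))

            out = list(tokens)
            for tok in proposed:                    # big model checks each guess
                wanted = target.argmax_next(out)
                out.append(wanted)                  # always keep the target's token
                if wanted != tok:
                    break                           # draft diverged; stop accepting guesses
            return out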

  • 70B Llama 2 at 35tokens/second on 4090
    1 project | /r/patient_hackernews | 14 Sep 2023

What are some alternatives?

When comparing koboldcpp and exllamav2 you can also consider the following projects:

KoboldAI

llama.cpp - LLM inference in C/C++

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

SillyTavern - LLM Frontend for Power Users.

TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)

ChatGPT-AutoExpert - 🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).

BlockMerge_Gradient - Merge Transformers language models by use of gradient parameters.

ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.

OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

gptq - Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".