exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs (by turboderp)

Exllamav2 Alternatives

Similar projects and alternatives to exllamav2

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives, so a higher number generally indicates a more relevant or more similar alternative to exllamav2.

exllamav2 reviews and mentions

Posts with mentions or reviews of exllamav2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Running Llama3 Locally
    1 project | news.ycombinator.com | 20 Apr 2024
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • What do you use to run your models?
    14 projects | /r/LocalLLaMA | 7 Dec 2023
    Sorry, I'm somewhat familiar with this term (I've seen it as a model loader in Oobabooga), but still not following the correlation here. Are you saying I should instead be using this project in lieu of llama.cpp? Or are you saying that there is, perhaps, an exllamav2 "extension" or similar within llama.cpp that I can use?
  • I just started having problems with the colab again. I get errors and it just stops. Help?
    1 project | /r/SillyTavernAI | 5 Dec 2023
    EDIT: I reported the bug on the exllamav2 GitHub. It's actually already fixed, just not in any currently released build.
  • Yi-34B-200K works on a single 3090 with 47K context/4bpw
    1 project | /r/LocalLLaMA | 8 Nov 2023
    Install exllamav2 from Git with pip install git+https://github.com/turboderp/exllamav2.git. Make sure you have Flash Attention 2 installed as well.
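
    For reference, here is a minimal loading/generation sketch along the lines of the example scripts in the exllamav2 repo at the time; the model path is a placeholder, the ~47K max_seq_len mirrors the setup described in the post, and the API may differ in newer releases.

        # Minimal exllamav2 generation sketch (assumed API of the era's example scripts).
        from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
        from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

        config = ExLlamaV2Config()
        config.model_dir = "/models/Yi-34B-200K-exl2-4bpw"  # hypothetical local path
        config.prepare()
        config.max_seq_len = 47 * 1024                      # the ~47K context from the post

        model = ExLlamaV2(config)
        cache = ExLlamaV2Cache(model, lazy=True)            # allocate the cache while loading
        model.load_autosplit(cache)                         # split layers across available VRAM

        tokenizer = ExLlamaV2Tokenizer(config)
        generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

        settings = ExLlamaV2Sampler.Settings()
        settings.temperature = 0.8
        settings.top_p = 0.9

        print(generator.generate_simple("The quick brown fox", settings, num_tokens=128))
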
  • Tested: ExllamaV2's max context on 24gb with 70B low-bpw & speculative sampling performance
    2 projects | /r/LocalLLaMA | 2 Nov 2023
    Recent releases of exllamav2 bring working FP8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
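
    A sketch of opting into that cache, assuming the 8-bit cache is exposed as ExLlamaV2Cache_8bit as a drop-in replacement for the regular FP16 cache (treat the class name and signature as an assumption and check your installed version):

        # Swap the regular KV cache for the 8-bit one to roughly halve cache memory,
        # which is what allows the doubled context length described above.
        from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit

        config = ExLlamaV2Config()
        config.model_dir = "/models/llama2-70b-exl2-2.4bpw"  # hypothetical low-bpw quant
        config.prepare()

        model = ExLlamaV2(config)
        cache = ExLlamaV2Cache_8bit(model, lazy=True)        # 8-bit KV cache instead of FP16
        model.load_autosplit(cache)
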
  • Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
    9 projects | news.ycombinator.com | 31 Oct 2023
    Without batching, I was actually thinking that's kind of modest.

    ExLlamaV2 will get 48 tokens/s on a 4090, which is a much slower and cheaper card than an H100:

    https://github.com/turboderp/exllamav2#performance

    I didn't test codellama, but the 3090 TI figures are in the ballpark of my generation speed on a 3090.

  • Guide for Llama2 70b model merging and exllama2 quantization
    2 projects | /r/LocalLLaMA | 24 Oct 2023
    First, you need the convert.py script from turboderp's Exllama2 repo. You can read all about the convert.py arguments here.
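
    As a rough sketch, the conversion step can be driven from Python like this; the flag names (-i, -o, -cf, -b) are recalled from the docs linked above and the paths are placeholders, so double-check everything against the repo before running:

        # Quantize a merged FP16 model to EXL2 by invoking convert.py (flags assumed
        # from the linked docs; verify against your exllamav2 checkout).
        import subprocess

        subprocess.run(
            [
                "python", "convert.py",
                "-i", "/models/llama2-70b-merged",      # hypothetical input model directory
                "-o", "/tmp/exl2-work",                 # hypothetical working directory
                "-cf", "/models/llama2-70b-exl2-4bpw",  # hypothetical output directory
                "-b", "4.0",                            # target average bits per weight
            ],
            check=True,
        )
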
  • LLM Falcon 180B Needs 720GB RAM to Run
    1 project | news.ycombinator.com | 24 Sep 2023
    > brute aggressive quantization

    Cutting edge quantization like ExLlama's EXL2 is far from brute force: https://github.com/turboderp/exllamav2#exl2-quantization

    > The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. Moreover, it's possible to apply multiple quantization levels to each linear layer, producing something akin to sparse quantization wherein more important weights (columns) are quantized with more bits. The same remapping trick that lets ExLlama work efficiently with act-order models allows this mixing of formats to happen with little to no impact on performance. Parameter selection is done automatically by quantizing each matrix multiple times, measuring the quantization error (with respect to the chosen calibration data) for each of a number of possible settings, per layer. Finally, a combination is chosen that minimizes the maximum quantization error over the entire model while meeting a target average bitrate.
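
    To make that parameter selection concrete, here is a toy sketch of the optimization the README describes: pick one candidate bitrate per layer so the average stays under a target while the worst per-layer error is minimized. The numbers are made up and this is not the actual exllamav2 implementation.

        # Toy version of the EXL2-style search: minimize the maximum per-layer
        # quantization error subject to an average bits-per-weight budget.
        from itertools import product

        # (bits_per_weight, measured_error) candidates per layer -- made-up values
        layers = [
            [(2.5, 0.080), (4.0, 0.020), (6.0, 0.005)],
            [(2.5, 0.120), (4.0, 0.030), (6.0, 0.008)],
            [(2.5, 0.060), (4.0, 0.015), (6.0, 0.004)],
        ]
        target_avg_bits = 4.0

        best = None
        for combo in product(*layers):              # brute force is fine for a toy example
            avg_bits = sum(b for b, _ in combo) / len(combo)
            if avg_bits > target_avg_bits:
                continue                            # over the bit budget
            worst_err = max(e for _, e in combo)
            if best is None or worst_err < best[0]:
                best = (worst_err, avg_bits, combo)

        worst_err, avg_bits, combo = best
        print([b for b, _ in combo], f"avg {avg_bits:.2f} bpw, max error {worst_err:.3f}")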

    Llama.cpp is also working on a feature that lets a small model "guess" the output of a big model, which then "checks" it for correctness. This is more of a performance feature, but you could also arrange it to accelerate a big model on a small GPU.
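
    As a toy illustration of that draft-and-verify idea (not llama.cpp's or exllamav2's actual implementation), the loop looks roughly like this, with stand-in functions for the two models:

        # Small model proposes a few tokens, big model checks them; matching tokens are
        # accepted, the first mismatch is replaced by the big model's own choice.
        import random

        random.seed(0)
        VOCAB = list("abcde")

        def draft_model(ctx, k):
            return [random.choice(VOCAB) for _ in range(k)]      # cheap guesses

        def big_model(ctx):
            return random.choice(VOCAB)                          # expensive single-token check

        def speculative_step(ctx, k=4):
            proposed = draft_model(ctx, k)
            accepted = []
            for tok in proposed:
                check = big_model(ctx + accepted)    # in practice these checks run as one
                if check == tok:                     # batched forward pass on the big model,
                    accepted.append(tok)             # which is where the speedup comes from
                else:
                    accepted.append(check)           # keep the big model's token and stop
                    break
            return accepted

        context = []
        for _ in range(5):
            context += speculative_step(context)
        print("".join(context))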

  • 70B Llama 2 at 35tokens/second on 4090
    1 project | /r/patient_hackernews | 14 Sep 2023

Stats

Basic exllamav2 repo stats
Mentions: 17
Stars: 2,935
Activity: 9.8
Last commit: 2 days ago
