exllama VS llama

Compare exllama vs llama and see their differences.

exllama

A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (by turboderp)

llama

Inference code for Llama models (by meta-llama)
             exllama         llama
Mentions     64              184
Stars        2,609           53,227
Growth       -               2.7%
Activity     9.0             8.1
Last commit  7 months ago    10 days ago
Language     Python          Python
License      MIT License     GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

exllama

Posts with mentions or reviews of exllama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-27.
  • Any way to optimally use GPU for faster llama calls?
    1 project | /r/LocalLLaMA | 27 Sep 2023
    not using exllama seems like a tremendous waste
  • ExLlama: Memory efficient way to run Llama
    1 project | news.ycombinator.com | 15 Aug 2023
  • Ask HN: Cheapest hardware to run Llama 2 70B
    5 projects | news.ycombinator.com | 9 Aug 2023
  • Llama Is Expensive
    1 project | news.ycombinator.com | 20 Jul 2023
    > We serve Llama on 2 80-GB A100 GPUs, as that is the minimum required to fit Llama in memory (with 16-bit precision)

    Well there is your problem.

    LLaMA quantized to 4 bits fits in 40GB. And it gets similar throughput when split across two consumer GPUs, which likely means better throughput on a single 40GB A100 (or a cheaper 48GB Pro GPU).

    https://github.com/turboderp/exllama#dual-gpu-results

    Also, I'm not sure which model was tested, but Llama 70B chat should perform better than the base model if the prompting syntax is right; that syntax was only recently reverse engineered from the Meta demo implementation.
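
As a sanity check on that claim, the 40GB figure falls straight out of the parameter count; a back-of-the-envelope sketch (it deliberately ignores KV cache and activation overhead, which add a few GB on top):

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate VRAM for the weights alone (ignores KV cache, activations)."""
    return n_params * bits_per_weight / 8 / 1e9

# Llama 70B at 16-bit vs. 4-bit precision
print(weight_memory_gb(70e9, 16))  # 140.0 GB -> needs 2x 80GB A100s
print(weight_memory_gb(70e9, 4))   # 35.0 GB  -> fits a single 40GB A100
```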

  • Accessing Llama 2 from the command-line with the LLM-replicate plugin
    16 projects | news.ycombinator.com | 18 Jul 2023
    For those getting started, the easiest one-click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/

    This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend, and supports GPU acceleration and the LLaMA, Falcon, MPT, and GPT-J model families. It also has API/CLI bindings.

    I just saw a slick new tool, https://ollama.ai/, that lets you run llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports Llama models OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend - it seemed to be CPU-only on my MBA, but I didn't poke at it too much and it's brand new, so we'll see.

    For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama

    For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/

    I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/

  • GPT-4 Details Leaked
    3 projects | news.ycombinator.com | 10 Jul 2023
    Deploying the 60B version is a challenge though and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .

    If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
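
For readers who want to see what that quantization step looks like, below is a minimal sketch along the lines of AutoGPTQ's quickstart; the checkpoint paths are placeholders and exact class/method names may differ between AutoGPTQ versions, so treat it as a starting point rather than a recipe.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

src_dir = "path/to/llama-30b"       # placeholder source checkpoint
dst_dir = "llama-30b-4bit-gptq"     # where the quantized weights land

tokenizer = AutoTokenizer.from_pretrained(src_dir)
# GPTQ calibrates layer by layer on a handful of sample inputs
examples = [tokenizer("GPTQ calibrates on short text samples like this one.",
                      return_tensors="pt")]

config = BaseQuantizeConfig(bits=4, group_size=128)  # common 4-bit settings
model = AutoGPTQForCausalLM.from_pretrained(src_dir, config)
model.quantize(examples)        # run the calibration + quantization pass
model.save_quantized(dst_dir)   # GPTQ-format output, servable with ExLlama
```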

  • Multi-GPU questions
    1 project | /r/LocalLLaMA | 9 Jul 2023
    ExLlama, for example, uses buffers on each card that reduce the amount of VRAM available for the model and context; see here: https://github.com/turboderp/exllama/issues/121
  • A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
    5 projects | /r/LocalLLaMA | 7 Jul 2023
    For the inference step, this repo can help you use ExLlama to run inference on an evaluation dataset with the best throughput.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 words.) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.
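
For a rough sense of scale, converting those throughputs with the 0.75 words/token rule of thumb quoted above (purely illustrative arithmetic):

```python
WORDS_PER_TOKEN = 0.75  # the rule of thumb quoted above

for model, tok_per_s in [("7B", 140), ("33B", 40)]:
    print(f"{model}: ~{tok_per_s * WORDS_PER_TOKEN:.0f} words/sec")
# 7B: ~105 words/sec; 33B: ~30 words/sec -- both well past human reading speed
```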

    You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.

    That being said, I personally mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune leads the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]) - I've only just started playing around with it since replit model tooling is not as good as Llama's (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).

    I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)

    [1] https://github.com/turboderp/exllama#results-so-far

    [2] https://github.com/aigoopy/llm-jeopardy

    [3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...

    [4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder

  • Local LLMs GPUs
    2 projects | /r/LocalLLaMA | 4 Jul 2023
    That's a 16GB GPU, you should be able to fit 13B at 4bit: https://github.com/turboderp/exllama

llama

Posts with mentions or reviews of llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-18.
  • Mark Zuckerberg: Llama 3, $10B Models, Caesar Augustus, Bioweapons [video]
    3 projects | news.ycombinator.com | 18 Apr 2024
    derivative works thereof).”

    https://github.com/meta-llama/llama/blob/b8348da38fde8644ef0...

    Also even if you did use Llama for something, they could unilaterally pull the rug on you once you hit 700 million users, AND anyone who thinks Meta broke their copyright loses their license. (Checking if you are still getting screwed is against the rules.)

    Therefore, Zuckerberg is accountable for explicitly anticompetitive conduct. I assumed an MMA fighter would appreciate the value of competition; go figure.

  • Hello OLMo: An Open LLM
    3 projects | news.ycombinator.com | 8 Apr 2024
    One thing I wanted to add and call attention to is the importance of licensing in open models. This is often overlooked when we blindly accept the vague branding of models as “open”, but I am noticing that many open weight models are actually using encumbered proprietary licenses rather than standard open source licenses that are OSI approved (https://opensource.org/licenses). As an example, Databricks’s DBRX model has a proprietary license that forces adherence to their highly restrictive Acceptable Use Policy by referencing a live website hosting their AUP (https://github.com/databricks/dbrx/blob/main/LICENSE), which means as they change their AUP, you may be further restricted in the future. Meta’s Llama is similar (https://github.com/meta-llama/llama/blob/main/LICENSE). I’m not sure who can depend on these models given this flaw.
  • Reaching LLaMA2 Performance with 0.1M Dollars
    2 projects | news.ycombinator.com | 4 Apr 2024
    It looks like Llama 2 7B took 184,320 A100-80GB GPU-hours to train[1]. This one says it used a 96×H100 GPU cluster for two weeks, i.e. 32,256 GPU-hours. That's 17.5% of the number of hours, but H100s are faster than A100s [2] and their FP16/bfloat16 performance is ~3x better.

    If they had tried to replicate Llama 2 identically with their hardware setup, it'd have cost a little less than twice what their MoE model did.

    [1] https://github.com/meta-llama/llama/blob/main/MODEL_CARD.md#...
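
The cost comparison in that comment reduces to a few lines of arithmetic; here is a quick check of the quoted figures (the ~3x H100-over-A100 factor is the commenter's rough estimate, not a measured number):

```python
llama2_gpu_hours = 184_320            # A100-80GB hours from the model card [1]
their_hours = 96 * 24 * 14            # 96 H100s for two weeks = 32,256 GPU-hours

print(their_hours / llama2_gpu_hours) # 0.175 -> the quoted 17.5%

h100_vs_a100 = 3                      # commenter's rough bf16 speedup estimate
replicate_hours = llama2_gpu_hours / h100_vs_a100   # ~61,440 H100-hours
print(replicate_hours / their_hours)  # ~1.9 -> "a little less than twice" the cost
```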

  • DBRX: A New Open LLM
    6 projects | news.ycombinator.com | 27 Mar 2024
    Ironically, the LLaMA license text [1] this is lifted verbatim from is itself copyrighted [2] and doesn't grant you the permission to copy it or make changes like s/meta/dbrx/g lol.

    [1] https://github.com/meta-llama/llama/blob/main/LICENSE#L65

  • How Chain-of-Thought Reasoning Helps Neural Networks Compute
    1 project | news.ycombinator.com | 22 Mar 2024
    This is kind of an epistemological debate at this level, and I make an effort to link to some source code [1] any time it seems contentious.

    LLMs (of the decoder-only, generative-pretrained family everyone means) are next token predictors in a literal implementation sense (there are some caveats around batching and what not, but none that really matter to the philosophy of the thing).

    But, they have some emergent behaviors that are a trickier beast. Probably the best way to think about a typical Instruct-inspired “chat bot” session is of them sampling from a distribution with a KL-style adjacency to the training corpus (sidebar: this is why shops that do and don’t train/tune on MMLU get ranked so differently than e.g. the arena rankings) at a response granularity, the same way a diffuser/U-net/de-noising model samples at the image batch (NCHW/NHWC) level.

    The corpus is stocked with everything from sci-fi novels with computers arguing their own sentience to tutorials on how to do a tricky anti-derivative step-by-step.

    This mental model has adequate explanatory power for anything a public LLM has ever been shown to do, but that only heavily implies it’s what they’re doing.

    There is active research into whether there is more going on that is thus far not conclusive to the satisfaction of an unbiased consensus. I personally think that research will eventually show it’s just sampling, but that’s a prediction not consensus science.

    They might be doing more, there is some research that represents circumstantial evidence they are doing more.

    [1] https://github.com/meta-llama/llama/blob/54c22c0d63a3f3c9e77...

  • Asking Meta to stop using the term "open source" for Llama
    1 project | news.ycombinator.com | 28 Feb 2024
  • Markov Chains Are the Original Language Models
    2 projects | news.ycombinator.com | 1 Feb 2024
    Predicting subsequent text is pretty much exactly what they do. Lots of very cool engineering that’s a real feat, but at its core it’s argmax(P(token | tokens, corpus)):

    https://github.com/facebookresearch/llama/blob/main/llama/ge...

    The engineering feats are up there with anything, but it’s a next token predictor.
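
As a concrete illustration of that point, here is a minimal greedy-decoding sketch; `model` is a hypothetical stand-in for any causal LM that maps a token batch to per-position logits, not the actual generation code linked above:

```python
import torch

def greedy_generate(model, tokens: list[int], max_new_tokens: int) -> list[int]:
    """Literal next-token prediction: repeatedly argmax P(token | preceding tokens)."""
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([tokens]))    # [batch=1, seq_len, vocab_size]
        next_token = int(torch.argmax(logits[0, -1]))  # most likely next token
        tokens.append(next_token)
    return tokens
```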

  • Meta AI releases Code Llama 70B
    6 projects | news.ycombinator.com | 29 Jan 2024
    https://github.com/facebookresearch/llama/pull/947/
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    > Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!

    actually it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...

    Mistral (is rumored to have) forked llama and is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...

    and both of these are SOTA open source models.

  • [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
    3 projects | /r/MachineLearning | 10 Dec 2023
    In transformers, they tried really hard to have a single function or method deal with self- and cross-attention mechanisms, masking, positional and relative encodings, interpolation, etc. While it lets a user call the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of Llama by FAIR with the implementation by HF to get an idea.

What are some alternatives?

When comparing exllama and llama you can also consider the following projects:

llama.cpp - LLM inference in C/C++

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ

chatgpt-vscode - A VSCode extension that allows you to use ChatGPT

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

KoboldAI

text-generation-inference - Large Language Model Text Generation Inference

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.