ROCm VS exllama

Compare ROCm and exllama and see how they differ.

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)

exllama

A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (by turboderp)
    Metric           ROCm              exllama
    Mentions         198               64
    Stars            3,637             2,582
    Growth           -                 -
    Activity         0.0               9.0
    Latest commit    5 months ago      7 months ago
    Language         Python            Python
    License          MIT License       MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.
  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    Yep, did exactly that. IMO he threw a fit, even though AMD was working with him squashing bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
  • ROCm 5.7.0 Release
    1 project | /r/ROCm | 26 Sep 2023
  • ROCm Is AMD's #1 Priority, Executive Says
    5 projects | news.ycombinator.com | 26 Sep 2023
    Ok, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...

    Nope. Anything about this on the Arch wiki? Nope.

    This bug report[2] from 2021? Maybe I need to update my groups. (A quick way to check that is sketched right after this list of posts.)

    [2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411

        $ ls -la /dev/kfd
  • Simplifying GPU Application Development with HMM
    2 projects | news.ycombinator.com | 29 Aug 2023
    HMM is, I believe, a Linux feature.

    AMD added HMM support in ROCm 5.0 according to this: https://github.com/RadeonOpenCompute/ROCm/blob/develop/CHANG...

  • AMD Ryzen APU turned into a 16GB VRAM GPU and it can run Stable Diffusion
    3 projects | news.ycombinator.com | 17 Aug 2023
    Woot, AMD now supports APUs? I sold my notebook as I hit a wall when trying ROCm [1]. Is there a list of working APUs?

    [1] https://github.com/RadeonOpenCompute/ROCm/issues/1587

  • Nvidia's CUDA Monopoly
    3 projects | news.ycombinator.com | 7 Aug 2023
    Last I heard he's abandoned working with AMD products.

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

  • Nvidia H100 GPUs: Supply and Demand
    2 projects | news.ycombinator.com | 1 Aug 2023
    They're talking about the meltdown he had on stream [1] (in front of the mentioned pirate flag), that ended with him saying he'd stop using AMD hardware [2]. He recanted this two weeks after talking with AMD [3].

    Maybe he'll succeed, but this definitely doesn't scream stability to me. I'd be wary of investing money into his ventures (but then I'm not a VC, so what do I know).

    [1] https://www.youtube.com/watch?v=Mr0rWJhv9jU

    [2] https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    [3] https://twitter.com/realGeorgeHotz/status/166980346408248934...

  • Open or closed source Nvidia driver?
    1 project | /r/linux | 9 Jul 2023
    As for ROCm support on consumer devices, AMD won't even clarify which devices are supported. https://github.com/RadeonOpenCompute/ROCm/pull/1738
  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    He flamed out, then is back after Lisa Su called him (lmao)

    https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t...

    https://www.youtube.com/watch?v=Mr0rWJhv9jU

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...

    On a personal level, that YouTube video doesn't make him come off looking that good... people are trying to get patches to him and generally soothe him / do damage control, and he's just being a bit of a manchild. And it sounds like that's the general course of events around a lot of his "efforts".

    On the other hand, he's not wrong either: having this private build inside AMD, and not even validating the officially supported configurations for the public builds they show to the world, isn't a good look - and that's just the very start of the problems around ROCm. AMD's OpenCL runtime was never stable or good either, and every experience I've heard of with it was "we spent so much time fighting AMD-specific runtime bugs and spec jank that what we ended up with was essentially vendor-proprietary anyway".

    On the other other hand, it sounds like AMD know this is a mess and has some big stability/maturity improvements in the pipeline. It seems clear from some of the smoke coming out of the building that they're cooking on more general ROCm support for RDNA cards, and generally working to patch the maturity and stability issues he's talking about. I hate the "wait for drivers/new software release bro it's gonna fix everything" that surrounds AMD products but in this case I'm at least hopeful they seem to understand the problem, even if it's completely absurdly late.

    Some of what he was viewing as "the process happening in secret" was likely people doing rush patches on the latest build to accommodate him, and he comes off as berating them over it. Again, like, that stream just comes off as "mercurial manchild" not coding genius. And everyone knew the driver situation is bad, that's why there's notionally alpha for him to realize here in the first place. He's bumping into moneymakers, and getting mad about it.

  • Disable "SetTensor/CopyTensor" console logging.
    2 projects | /r/ROCm | 6 Jul 2023
    I tried to train another model using InceptionResNetV2 and the same issue happens. Also, this happens even with the model.predict() method when using the GPU. Probably this is an issue related to the AMD Radeon RX 6700 XT or some misconfiguration on my end. System information: Arch Linux 6.1.32-1-lts - AMD Radeon RX 6700 XT - gfx1031. Opened issues: - https://github.com/RadeonOpenCompute/ROCm/issues/2250 - https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125
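
A recurring step in the threads above is the /dev/kfd check from the "ROCm Is AMD's #1 Priority" post: on Linux, ROCm exposes its compute device node at /dev/kfd, which is normally group-owned by render (older setups use video), so "update my groups" means adding your user to those groups. Below is a minimal sketch of that check in Python; the device path and group names are the common defaults, not a guarantee for every distro.

    import getpass
    import grp
    import os

    KFD = "/dev/kfd"  # compute device node exposed by the amdkfd kernel driver
    user = getpass.getuser()

    print(f"{KFD} exists: {os.path.exists(KFD)}")
    print(f"{user} can read/write {KFD}: {os.access(KFD, os.R_OK | os.W_OK)}")

    # /dev/kfd is usually group-owned by "render" (or "video" on older setups);
    # the usual fix is `sudo usermod -aG render,video $USER` plus a re-login.
    for group in ("render", "video"):
        try:
            members = grp.getgrnam(group).gr_mem  # configured membership in /etc/group
            print(f"{user} listed in {group}: {user in members}")
        except KeyError:
            print(f"group {group} does not exist on this system")

If os.access reports no access even though the group membership looks right, a re-login (or reboot) is usually what's missing, since group changes only apply to new sessions.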

exllama

Posts with mentions or reviews of exllama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-09.
  • Any way to optimally use GPU for faster llama calls?
    1 project | /r/LocalLLaMA | 27 Sep 2023
    Not using exllama seems like a tremendous waste.
  • ExLlama: Memory efficient way to run Llama
    1 project | news.ycombinator.com | 15 Aug 2023
  • Ask HN: Cheapest hardware to run Llama 2 70B
    5 projects | news.ycombinator.com | 9 Aug 2023
  • Llama Is Expensive
    1 project | news.ycombinator.com | 20 Jul 2023
    > We serve Llama on 2 80-GB A100 GPUs, as that is the minimum required to fit Llama in memory (with 16-bit precision)

    Well there is your problem.

    LLaMA quantized to 4 bits fits in 40GB. And it gets similar throughput split between dual consumer GPUs, which likely means better throughput on a single 40GB A100 (or a cheaper 48GB Pro GPU). (The rough arithmetic is sketched at the end of this list of posts.)

    https://github.com/turboderp/exllama#dual-gpu-results

    Also, I'm not sure which model was tested, but Llama 70B chat should have better performance than the base model if the prompting syntax is right. That was only reverse engineered from the Meta demo implementation recently.

  • Accessing Llama 2 from the command-line with the LLM-replicate plugin
    16 projects | news.ycombinator.com | 18 Jul 2023
    For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/

    This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend and supports GPU acceleration, and LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.

    I just saw a slick new tool, https://ollama.ai/ , that will let you install a llama2-7b with a single `ollama run llama2` command; it has a very simple one-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend - it seemed to be CPU-only on my MBA, but I didn't poke around too much and it's brand new, so we'll see.

    For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama

    For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/

    I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/

  • GPT-4 Details Leaked
    3 projects | news.ycombinator.com | 10 Jul 2023
    Deploying the 60B version is a challenge though, and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama . (A minimal loading sketch follows at the end of this list of posts.)

    If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...

  • Multi-GPU questions
    1 project | /r/LocalLLaMA | 9 Jul 2023
    Exllama for example uses buffers on each card that reduce the amount of VRAM available for model and context, see here. https://github.com/turboderp/exllama/issues/121
  • A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
    5 projects | /r/LocalLLaMA | 7 Jul 2023
    For the inference step, this repo can help you use ExLlama to run inference on an evaluation dataset for the best throughput.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 words.) It's quite zippy. llama.cpp now performs comparably on Nvidia GPUs (but they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.

    You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.

    That being said, personally I mostly use GPT-4 for code assistance so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]) - I've only just started playing around with it since the replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).

    I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)

    [1] https://github.com/turboderp/exllama#results-so-far

    [2] https://github.com/aigoopy/llm-jeopardy

    [3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...

    [4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder

  • Local LLMs GPUs
    2 projects | /r/LocalLLaMA | 4 Jul 2023
    That's a 16GB GPU, you should be able to fit 13B at 4bit: https://github.com/turboderp/exllama
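
To make the 4-bit figures quoted above ("fits in 40GB" for 70B, "13B at 4bit" on a 16GB card) concrete, here is the back-of-the-envelope arithmetic. The 20% overhead factor is a rough assumption for quantization scales, activations, and KV cache; real usage also depends on context length and the loader's per-device buffers.

    GIB = 1024 ** 3

    def approx_vram_gib(n_params_billion, bits=4, overhead=1.2):
        """Rough VRAM estimate for GPTQ-style quantized weights."""
        weight_bytes = n_params_billion * 1e9 * bits / 8
        return weight_bytes * overhead / GIB

    for size in (7, 13, 33, 70):
        print(f"{size}B @ 4-bit: ~{approx_vram_gib(size):.1f} GiB")

    # 70B lands around 39 GiB (consistent with "fits in 40GB");
    # 13B lands around 7 GiB, leaving headroom for context on a 16GB card.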
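
For the quantize-then-serve workflow from the "GPT-4 Details Leaked" post (AutoGPTQ or GPTQ-for-LLaMa for the 4-bit weights, ExLlama for speed), here is a minimal loading sketch using AutoGPTQ's from_quantized entry point. The repo id is a placeholder for whichever prequantized checkpoint you pick, and options such as use_safetensors depend on how that checkpoint was published; ExLlama consumes the same GPTQ weights, but through its own loader in the turboderp/exllama repo, so it is not reproduced here.

    # Placeholder repo id - substitute a real prequantized GPTQ checkpoint.
    from auto_gptq import AutoGPTQForCausalLM
    from transformers import AutoTokenizer

    repo = "your-org/your-llama-4bit-gptq"
    tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

    prompt = "Explain GPTQ quantization in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))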

What are some alternatives?

When comparing ROCm and exllama you can also consider the following projects:

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

llama.cpp - LLM inference in C/C++

rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform

GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ

oneAPI.jl - Julia support for the oneAPI programming toolkit.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

SHARK - SHARK - High Performance Machine Learning Distribution

KoboldAI

text-generation-inference - Large Language Model Text Generation Inference