ROCm vs llama.cpp

Compare ROCm vs llama.cpp and see how they differ.

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)

llama.cpp

LLM inference in C/C++ (by ggerganov)
              ROCm          llama.cpp
Mentions      198           769
Stars         3,637         55,846
Growth        -             -
Activity      0.0           10.0
Latest commit 5 months ago  7 days ago
Language      Python        C++
License       MIT License   MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.
  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    Yep, did exactly that. IMO he threw a fit, even though AMD was working with him squashing bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
  • ROCm 5.7.0 Release
    1 project | /r/ROCm | 26 Sep 2023
  • ROCm Is AMD's #1 Priority, Executive Says
    5 projects | news.ycombinator.com | 26 Sep 2023
    Ok, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...

    Nope. Anything about this on the Arch Wiki? Nope.

    This bug report[2] from 2021? Maybe I need to update my groups.

    [2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411

        # /dev/kfd is ROCm's compute (KFD) device node; access to it
        # normally requires membership in the video/render groups
        $ ls -la /dev/kfd
  • Simplifying GPU Application Development with HMM
    2 projects | news.ycombinator.com | 29 Aug 2023
    HMM is, I believe, a Linux feature.

    AMD added HMM support in ROCm 5.0 according to this: https://github.com/RadeonOpenCompute/ROCm/blob/develop/CHANG...

  • AMD Ryzen APU turned into a 16GB VRAM GPU and it can run Stable Diffusion
    3 projects | news.ycombinator.com | 17 Aug 2023
    Woot, AMD now supports APUs? I sold my notebook as I hit a wall when trying ROCm [1]. Is there a list of working APUs?

    [1] https://github.com/RadeonOpenCompute/ROCm/issues/1587

  • Nvidia's CUDA Monopoly
    3 projects | news.ycombinator.com | 7 Aug 2023
    Last I heard he's abandoned working with AMD products.

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

  • Nvidia H100 GPUs: Supply and Demand
    2 projects | news.ycombinator.com | 1 Aug 2023
    They're talking about the meltdown he had on stream [1] (in front of the mentioned pirate flag), which ended with him saying he'd stop using AMD hardware [2]. He recanted two weeks later, after talking with AMD [3].

    Maybe he'll succeed, but this definitely doesn't scream stability to me. I'd be wary of investing money into his ventures (but then I'm not a VC, so what do I know).

    [1] https://www.youtube.com/watch?v=Mr0rWJhv9jU

    [2] https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    [3] https://twitter.com/realGeorgeHotz/status/166980346408248934...

  • Open or closed source Nvidia driver?
    1 project | /r/linux | 9 Jul 2023
    As for ROCm support on consumer devices, AMD won't even clarify which devices are supported. https://github.com/RadeonOpenCompute/ROCm/pull/1738
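
    A quick way to see what target your own card reports (rocminfo ships with ROCm; the grep merely filters the agent properties):

        # list ROCm agents and show their gfx targets
        rocminfo | grep -i gfx
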
  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    He flamed out, then came back after Lisa Su called him (lmao)

    https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t...

    https://www.youtube.com/watch?v=Mr0rWJhv9jU

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...

    On a personal level, that YouTube video doesn't make him come off looking that good... people are trying to get patches to him and generally soothe him / do damage control, and he's just being a bit of a manchild. And it sounds like that's the general course of events around a lot of his "efforts".

    On the other hand, he's not wrong either: having this private build inside AMD while not even validating the official, supported configurations for the non-private builds they show to the world isn't a good look, and that's just the very start of the problems around ROCm. AMD's OpenCL runtime was never stable or good either, and every experience I've heard of with it was "we spent so much time fighting AMD-specific runtime bugs and spec jank that what we ended up with was essentially vendor-proprietary anyway".

    On the other other hand, it sounds like AMD knows this is a mess and has some big stability/maturity improvements in the pipeline. It seems clear from some of the smoke coming out of the building that they're working on more general ROCm support for RDNA cards and generally patching the maturity and stability issues he's talking about. I hate the "wait for the next driver/software release, bro, it's gonna fix everything" attitude that surrounds AMD products, but in this case I'm at least hopeful, since they seem to understand the problem, even if it's absurdly late.

    Some of what he was viewing as "the process happening in secret" was likely people rushing patches on the latest build to accommodate him, and he comes off as berating them for it. Again, that stream just comes off as "mercurial manchild", not "coding genius". And everyone knew the driver situation was bad; that's why there's notionally alpha for him to realize here in the first place. He's bumping into moneymakers and getting mad about it.

  • Disable "SetTensor/CopyTensor" console logging.
    2 projects | /r/ROCm | 6 Jul 2023
    I tried to train another model using InceptionResNetV2 and the same issue happens. It also occurs with the model.predict() method when using the GPU. This is probably an issue related to the AMD Radeon RX 6700 XT or some misconfiguration on my end. System information: Arch Linux 6.1.32-1-lts, AMD Radeon RX 6700 XT (gfx1031). Opened issues:

    - https://github.com/RadeonOpenCompute/ROCm/issues/2250
    - https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125
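
    For what it's worth, gfx1031 (the RX 6700 XT) isn't on ROCm's official support list, and a common community workaround is to have the runtime treat the card as gfx1030. A hedged sketch of that unsupported override (the script name is hypothetical, and behavior varies by ROCm release):

        # present the gfx1031 card to ROCm as gfx1030 (unsupported workaround)
        export HSA_OVERRIDE_GFX_VERSION=10.3.0
        python train_inceptionresnetv2.py   # hypothetical training script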

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-21.
  • Phi-3 Weights Released
    1 project | news.ycombinator.com | 23 Apr 2024
    well https://github.com/ggerganov/llama.cpp/issues/6849
  • Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
    3 projects | news.ycombinator.com | 21 Apr 2024
  • Llama.cpp Working on Support for Llama3
    1 project | news.ycombinator.com | 18 Apr 2024
  • Embeddings are a good starting point for the AI curious app developer
    7 projects | news.ycombinator.com | 17 Apr 2024
    I've just done this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app with a built-in llama.cpp server and a local vector database.)

    Running an embedding server locally is pretty straightforward:

    - Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
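
    A minimal sketch of that setup (the model path and port are assumed for illustration; --embedding is a llama.cpp server flag, though the exact endpoint name varies between releases):

        # start the llama.cpp server with embeddings enabled
        ./server -m ./model.gguf --embedding --port 8080

        # request an embedding for a piece of text
        curl -s http://localhost:8080/embedding \
             -H 'Content-Type: application/json' \
             -d '{"content": "Hello, world"}'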

  • Mixtral 8x22B
    4 projects | news.ycombinator.com | 17 Apr 2024
  • Llama.cpp: Improve CPU prompt eval speed
    1 project | news.ycombinator.com | 17 Apr 2024
  • Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
    9 projects | news.ycombinator.com | 17 Apr 2024
    Ah, thanks for this! Unfortunately, I can no longer edit the parent comment you replied to.

    As I said, I only compared the contributor graphs [0] and checked for overlaps. But those apparently only go back about a year and only list at most 100 contributors, ranked by number of commits.

    [0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors

  • KodiBot - Local Chatbot App for Desktop
    2 projects | dev.to | 11 Apr 2024
    KodiBot is a desktop app that enables users to run their own AI chat assistants locally and offline on Windows, Mac, and Linux. KodiBot is a standalone app and does not require an internet connection or additional dependencies to run local chat assistants. It supports both llama.cpp-compatible models and the OpenAI API.
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • The lifecycle of a code AI completion
    6 projects | news.ycombinator.com | 7 Apr 2024
    For those who might not be aware of this, there is also an open source project on GitHub called "Twinny", which is an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny

    It can be used with a number of local model services. Currently, my setup on an NVIDIA 4090 runs both the base and instruct models for deepseek-coder 6.7b, using Q5_K_M-quantized GGUF files (for performance) through the llama.cpp "server", where the base model handles completions and the instruct model handles chat interactions (see the sketch after the links below).

    llama.cpp: https://github.com/ggerganov/llama.cpp/

    deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...

    deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
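
    A sketch of that two-server arrangement (the ports and GPU-offload value are assumed for illustration; -m, --port, and -ngl are llama.cpp server flags):

        # terminal 1: base model serving code completions
        ./server -m deepseek-coder-6.7b-base.Q5_K_M.gguf --port 8080 -ngl 99

        # terminal 2: instruct model serving chat
        ./server -m deepseek-coder-6.7b-instruct.Q5_K_M.gguf --port 8081 -ngl 99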

What are some alternatives?

When comparing ROCm and llama.cpp, you can also consider the following projects:

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

gpt4all - gpt4all: run open-source LLMs anywhere

rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

oneAPI.jl - Julia support for the oneAPI programming toolkit.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

SHARK - SHARK - High Performance Machine Learning Distribution

ggml - Tensor library for machine learning

plaidml - PlaidML is a framework for making deep learning work everywhere.

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM