vllm VS ROCm

Compare vllm vs ROCm and see how they differ.

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs (by vllm-project)

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)
                vllm                ROCm
Mentions        31                  198
Stars           19,344              3,637
Growth          12.6%               -
Activity        9.9                 0.0
Last commit     1 day ago           5 months ago
Language        Python              Python
License         Apache License 2.0  MIT License
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

vllm

Posts with mentions or reviews of vllm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-09.
  • AI leaderboards are no longer useful. It's time to switch to Pareto curves
    1 project | news.ycombinator.com | 30 Apr 2024
    I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.

    What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.

    On the other hand, it's not just GPT, there are currently floating-point difficulties in vllm which significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32. So it's possible that GPT-3.5 is using an especially low-precision float and introducing nondeterminism by saving money on compute costs.

    Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.

    [1] Or the time, or the motivation :) But this stuff is expensive.
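
    As an illustration of the floating-point claim above (a toy sketch, not vllm's internals), accumulation order alone changes low-precision results, which is enough to flip a temp=0 argmax between near-tied tokens:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(10_000)

        def seq_sum(a):
            # Sequential left-to-right accumulation in the array's own dtype,
            # mimicking one of many possible reduction orders on a GPU.
            acc = a.dtype.type(0.0)
            for v in a:
                acc = a.dtype.type(acc + v)
            return float(acc)

        for dtype in (np.float16, np.float32):
            a = x.astype(dtype)
            # Reversing the array stands in for "different batching"; the gap
            # is far larger in float16 than float32, which is why upcasting
            # to float32 is the suggested fix in the linked issue.
            print(dtype.__name__, seq_sum(a) - seq_sum(a[::-1]))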

  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    The easiest way is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness).
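
    A minimal sketch of that suggestion, assuming the Hugging Face model id mistralai/Mixtral-8x22B-Instruct-v0.1 and two tensor-parallel GPUs (both are illustrative; check the actual VRAM requirements):

        from vllm import LLM, SamplingParams

        # Shard the MoE weights across two GPUs via tensor parallelism.
        llm = LLM(
            model="mistralai/Mixtral-8x22B-Instruct-v0.1",
            tensor_parallel_size=2,
        )
        params = SamplingParams(temperature=0.0, max_tokens=32)
        out = llm.generate(["What is a mixture-of-experts model?"], params)
        print(out[0].outputs[0].text)
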
  • FLaNK AI for 11 March 2024
    46 projects | dev.to | 11 Mar 2024
  • Show HN: We got fine-tuning Mistral-7B to not suck
    4 projects | news.ycombinator.com | 7 Feb 2024
    Great question! Scheduling workloads onto GPUs in a way that utilises VRAM efficiently was quite the challenge.

    What we found was that the IO latency of loading model weights into VRAM will kill responsiveness if you don't "re-use" sessions (i.e. keep the model weights loaded and run multiple inference sessions over the same loaded weights).

    Obviously projects like https://github.com/vllm-project/vllm exist but we needed to build out a scheduler that can run a fleet of GPUs for a matrix of text/image vs inference/finetune sessions.

    disclaimer: I work on Helix
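
    To make that "session re-use" idea concrete, here is a minimal sketch using vllm (not Helix's scheduler; the model id and prompts are placeholders): the weights are loaded into VRAM once, and every later call reuses them.

        from vllm import LLM, SamplingParams

        # Pay the weight-loading IO cost exactly once.
        llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
        params = SamplingParams(temperature=0.7, max_tokens=128)

        # Each call below reuses the already-resident weights, so there is
        # no per-request reload penalty.
        for batch in (["Summarize vLLM in one line."],
                      ["What does fine-tuning change?"]):
            for out in llm.generate(batch, params):
                print(out.outputs[0].text)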

  • Mistral CEO confirms 'leak' of new open source AI model nearing GPT4 performance
    5 projects | news.ycombinator.com | 31 Jan 2024
    FYI, vLLM also just added experimental multi-LoRA support: https://github.com/vllm-project/vllm/releases/tag/v0.3.0

    Also check out the new prefix caching; I see huge potential for batch processing there!
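
    A hedged sketch of that multi-LoRA API (vLLM >= 0.3.0; the base model and adapter path below are placeholders):

        from vllm import LLM, SamplingParams
        from vllm.lora.request import LoRARequest

        # enable_lora lets one engine serve multiple adapters over a single
        # set of base weights.
        llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
        out = llm.generate(
            ["Write a SQL query listing all users."],
            SamplingParams(max_tokens=64),
            # (adapter name, unique int id, local path to the adapter)
            lora_request=LoRARequest("sql-adapter", 1, "/path/to/sql-lora"),
        )
        print(out[0].outputs[0].text)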

  • VLLM Sacrifices Accuracy for Speed
    1 project | news.ycombinator.com | 23 Jan 2024
  • Easy, fast, and cheap LLM serving for everyone
    1 project | news.ycombinator.com | 17 Dec 2023
  • vllm
    1 project | news.ycombinator.com | 15 Dec 2023
  • Mixtral Expert Parallelism
    1 project | news.ycombinator.com | 15 Dec 2023
  • Mixtral 8x7B Support
    1 project | news.ycombinator.com | 11 Dec 2023

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.
  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    Yep, did exactly that. IMO he threw a fit, even though AMD was working with him to squash bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
  • ROCm 5.7.0 Release
    1 project | /r/ROCm | 26 Sep 2023
  • ROCm Is AMD's #1 Priority, Executive Says
    5 projects | news.ycombinator.com | 26 Sep 2023
    OK, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...

    Nope. Anything about this on the Arch wiki? Nope.

    This bug report[2] from 2021? Maybe I need to update my groups.

    [2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411

        $ ls -la /dev/kfd
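
    For reference, a small Python sketch of the permission check that bug report points at (the "render"/"video" group requirement is the usual suspect; group names vary by distro):

        import grp, os

        path = "/dev/kfd"  # ROCm's kernel compute interface
        if not os.path.exists(path):
            print("no /dev/kfd: amdgpu kernel driver not loaded?")
        elif os.access(path, os.R_OK | os.W_OK):
            print("ok: /dev/kfd is accessible")
        else:
            # Typically fixed by adding the user to the owning group
            # and logging in again.
            needed = grp.getgrgid(os.stat(path).st_gid).gr_name
            mine = sorted(grp.getgrgid(g).gr_name for g in os.getgroups())
            print(f"/dev/kfd needs group '{needed}'; your groups: {mine}")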
  • Simplifying GPU Application Development with HMM
    2 projects | news.ycombinator.com | 29 Aug 2023
    HMM is, I believe, a Linux feature.

    AMD added HMM support in ROCm 5.0 according to this: https://github.com/RadeonOpenCompute/ROCm/blob/develop/CHANG...

  • AMD Ryzen APU turned into a 16GB VRAM GPU and it can run Stable Diffusion
    3 projects | news.ycombinator.com | 17 Aug 2023
    Woot, AMD now supports APUs? I sold my notebook as I hit a wall when trying ROCm [1]. Is there a list of working APUs?

    [1] https://github.com/RadeonOpenCompute/ROCm/issues/1587

  • Nvidia's CUDA Monopoly
    3 projects | news.ycombinator.com | 7 Aug 2023
    Last I heard he's abandoned working with AMD products.

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

  • Nvidia H100 GPUs: Supply and Demand
    2 projects | news.ycombinator.com | 1 Aug 2023
    They're talking about the meltdown he had on stream [1] (in front of the mentioned pirate flag), that ended with him saying he'd stop using AMD hardware [2]. He recanted this two weeks after talking with AMD [3].

    Maybe he'll succeed, but this definitely doesn't scream stability to me. I'd be wary of investing money into his ventures (but then I'm not a VC, so what do I know).

    [1] https://www.youtube.com/watch?v=Mr0rWJhv9jU

    [2] https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    [3] https://twitter.com/realGeorgeHotz/status/166980346408248934...

  • Open or closed source Nvidia driver?
    1 project | /r/linux | 9 Jul 2023
    As for ROCm support on consumer devices, AMD won't even clarify which devices are supported. https://github.com/RadeonOpenCompute/ROCm/pull/1738
  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    He flamed out, then is back after Lisa Su called him (lmao)

    https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t...

    https://www.youtube.com/watch?v=Mr0rWJhv9jU

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...

    On a personal level, that YouTube video doesn't make him come off looking good... people are trying to get patches to him and generally soothe him / do damage control, and he's just being a bit of a manchild. And it sounds like that's the general course of events around a lot of his "efforts".

    On the other hand, he's not wrong either: having this private build inside AMD, and not even validating the officially supported configurations for the non-private builds they show to the world, isn't a good look, and that's just the very start of the problems around ROCm. AMD's OpenCL runtime was never stable or good either, and every experience I've heard with it was "we spent so much time fighting AMD-specific runtime bugs and spec jank that what we ended up with was essentially vendor-proprietary anyway".

    On the other other hand, it sounds like AMD know this is a mess and has some big stability/maturity improvements in the pipeline. It seems clear from some of the smoke coming out of the building that they're cooking on more general ROCm support for RDNA cards, and generally working to patch the maturity and stability issues he's talking about. I hate the "wait for drivers/new software release bro it's gonna fix everything" that surrounds AMD products but in this case I'm at least hopeful they seem to understand the problem, even if it's completely absurdly late.

    Some of what he was viewing as "the process happening in secret" was likely people doing rush patches on the latest build to accommodate him, and he comes off as berating them over it. Again, that stream just comes off as "mercurial manchild", not coding genius. And everyone knew the driver situation was bad; that's why there's notionally alpha for him to realize here in the first place. He's bumping into moneymakers and getting mad about it.

  • Disable "SetTensor/CopyTensor" console logging.
    2 projects | /r/ROCm | 6 Jul 2023
    I tried to train another model using InceptionResNetV2 and the same issue happens. It also happens with the model.predict() method when using the GPU. This is probably an issue related to the AMD Radeon RX 6700 XT or some misconfiguration on my end. System information: Arch Linux 6.1.32-1-lts, AMD Radeon RX 6700 XT (gfx1031). Opened issues: https://github.com/RadeonOpenCompute/ROCm/issues/2250 and https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125
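
    A commonly suggested (but unofficial) workaround for RDNA2 cards like the RX 6700 XT is to make the ROCm runtime treat gfx1031 as the officially supported gfx1030 ISA. A hedged sketch; the override must be set before the framework initializes ROCm, and results vary by ROCm/TensorFlow version:

        import os

        # gfx1031 (RX 6700 XT) is not on ROCm's official support list;
        # overriding the reported ISA to 10.3.0 (gfx1030) often lets the
        # shipped kernels load anyway. Unsupported: use at your own risk.
        os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

        import tensorflow as tf  # import only after the override

        print(tf.config.list_physical_devices("GPU"))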

What are some alternatives?

When comparing vllm and ROCm you can also consider the following projects:

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

CTranslate2 - Fast inference engine for Transformer models

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform

Llama-2-Onnx

oneAPI.jl - Julia support for the oneAPI programming toolkit.

tritony - Tiny configuration for Triton Inference Server

SHARK - SHARK - High Performance Machine Learning Distribution

faster-whisper - Faster Whisper transcription with CTranslate2

llama.cpp - LLM inference in C/C++