AMD May Get Across the CUDA Moat

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. CTranslate2

    Fast inference engine for Transformer models
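
    A minimal sketch of the CTranslate2 Python API, for context. The model directory, tokens, and compute type below are placeholders rather than anything from the post; CTranslate2 expects a converted model and pre-tokenized input.

      # Hedged sketch: load a converted model on the CUDA backend and translate one sentence.
      import ctranslate2

      translator = ctranslate2.Translator(
          "ende_ctranslate2/",          # example path to a converted model
          device="cuda",                # "cpu" also works
          compute_type="int8_float16",  # quantized weights with FP16 compute
      )

      # Input must already be tokenized (SentencePiece pieces in this example).
      results = translator.translate_batch([["▁Hello", "▁world", "!"]])
      print(results[0].hypotheses[0])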

    I wouldn’t say ROCm code is “slower”, per se, but in practice that’s how it presents. References:

    https://github.com/InternLM/lmdeploy

    https://github.com/vllm-project/vllm

    https://github.com/OpenNMT/CTranslate2

    You know what’s missing from all of these and many more like them? Support for ROCm. This is all before you get to the really wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.

    ROCm is at the “get it to work” stage (see the top comment, blog posts everywhere celebrating minor successes, etc.). CUDA is at the “wring every last penny of performance out of this thing” stage.

    In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.

    One of my projects specifically targets Pascal and up, back when I thought even Pascal was a stretch. Cue my surprise when I got a report of someone casually firing it up on Maxwell, when I was pretty certain there was no way it could work.

    A Maxwell laptop chip. It also runs just as well on an H100.

    THAT is hardware support.
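
    To make the hardware-support point concrete, here is a minimal sketch that lists every visible CUDA device with its compute capability (Maxwell is sm_5x, Pascal sm_6x, and Hopper, i.e. the H100, is sm_90). It assumes a CUDA build of PyTorch, which is not mentioned in the post; it is simply a convenient way to read the device properties.

      # Enumerate CUDA devices and report their compute capability.
      # The architecture names are an approximate mapping from the major version.
      import torch

      ARCH_BY_MAJOR = {5: "Maxwell", 6: "Pascal", 7: "Volta/Turing",
                       8: "Ampere/Ada", 9: "Hopper"}

      if not torch.cuda.is_available():
          print("No CUDA device visible")
      else:
          for i in range(torch.cuda.device_count()):
              major, minor = torch.cuda.get_device_capability(i)
              name = torch.cuda.get_device_name(i)
              print(f"GPU {i}: {name} (sm_{major}{minor}, {ARCH_BY_MAJOR.get(major, 'unknown')})")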

  2. Stream

    Stream - Scalable APIs for Chat, Feeds, Moderation, & Video. Stream helps developers build engaging apps that scale to millions with performant and flexible Chat, Feeds, Moderation, and Video APIs and SDKs powered by a global edge network and enterprise-grade infrastructure.

  3. ROCm

    Discontinued AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

    Yep, did exactly that. IMO he threw a fit, even though AMD was working with him to squash bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

  4. AdaptiveCpp

    Compiler for multiple programming models (SYCL, C++ standard parallelism, HIP/CUDA) for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!

    Not natively, but AdaptiveCpp (previously hipSYCL, then Open SYCL) has a single-source, single-compiler-pass (SSCP) mode, where it basically stores LLVM IR as an intermediate representation.

    https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...

    The performance penalty was within a few percent, at least according to the paper (figures 9 and 10).

  5. mlc-llm

    Universal LLM Deployment Engine with ML Compilation

    For LLM inference, a shoutout to MLC LLM, which runs LLMs on basically any GPU API that's widely available: https://github.com/mlc-ai/mlc-llm
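
    A hedged sketch of MLC LLM's OpenAI-style Python engine. The MLCEngine class and the "HF://..." model reference follow the project's documented pattern, but the exact names and the model id here are assumptions and may differ between releases.

      # MLC LLM picks whichever backend is available (CUDA, ROCm, Metal, Vulkan, ...).
      from mlc_llm import MLCEngine

      model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # example model id, an assumption
      engine = MLCEngine(model)

      response = engine.chat.completions.create(
          messages=[{"role": "user", "content": "Which GPU backends can you run on?"}],
          model=model,
      )
      print(response.choices[0].message.content)
      engine.terminate()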

  6. lmdeploy

    LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
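
    LMDeploy is one of the CUDA-first serving stacks cited in the comment quoted under CTranslate2 above. A minimal sketch of its high-level Python pipeline follows; the model id is an example, not something taken from the post.

      # Hedged sketch of lmdeploy's pipeline API for chat-style generation.
      from lmdeploy import pipeline

      pipe = pipeline("internlm/internlm2_5-7b-chat")  # example model id
      responses = pipe(["Why do inference engines tend to target CUDA first?"])
      print(responses[0].text)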

  7. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs
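
    vLLM is another engine cited in the comment quoted under CTranslate2 above. A minimal sketch of its offline batch-inference API follows; it assumes a CUDA build of vllm, and the model id is an example.

      # Hedged sketch of vLLM offline inference.
      from vllm import LLM, SamplingParams

      llm = LLM(model="facebook/opt-125m")             # example model id
      params = SamplingParams(temperature=0.8, max_tokens=64)

      outputs = llm.generate(["The CUDA moat exists because"], params)
      for out in outputs:
          print(out.outputs[0].text)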

  8. faster-whisper

    Faster Whisper transcription with CTranslate2
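
    Since faster-whisper runs Whisper on top of CTranslate2, a short usage sketch shows where the device choice surfaces; the model size and audio file name are placeholders.

      # Hedged sketch of faster-whisper transcription on the CUDA backend.
      from faster_whisper import WhisperModel

      model = WhisperModel("small", device="cuda", compute_type="float16")
      segments, info = model.transcribe("audio.mp3")

      print(f"Detected language: {info.language}")
      for segment in segments:
          print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")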

  9. InfluxDB

    InfluxDB – Built for High-Performance Time Series Workloads. InfluxDB 3 OSS is now GA. Transform, enrich, and act on time series data directly in the database. Automate critical tasks and eliminate the need to move data externally. Download now.

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number means a more popular project.

Related posts

  • Lossless LLM 3x Throughput Increase by LMCache

    2 projects | news.ycombinator.com | 24 Jun 2025
  • Why DeepSeek is cheap at scale but expensive to run locally

    6 projects | news.ycombinator.com | 1 Jun 2025
  • Bringing Function Calling to DeepSeek Models on SGLang

    1 project | dev.to | 23 Apr 2025
  • SGLang DeepSeek V3 Support with Collab with DeepSeek Team (Nvidia or AMD)

    1 project | news.ycombinator.com | 6 Feb 2025
  • RWKV.cpp is now being deployed with the latest Windows 11 system

    1 project | news.ycombinator.com | 4 Sep 2024
