ROCm VS ROCm-OpenCL-Runtime

Compare ROCm vs ROCm-OpenCL-Runtime and see what their differences are.

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)

ROCm-OpenCL-Runtime

ROCm OpenCL Runtime (by ROCm)
                 ROCm            ROCm-OpenCL-Runtime
Mentions         198             15
Stars            3,637           171
Growth           -               -
Activity         0.0             0.0
Latest commit    4 months ago    2 months ago
Language         Python          C++
License          MIT License     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.

ROCm-OpenCL-Runtime

Posts with mentions or reviews of ROCm-OpenCL-Runtime. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-26.
  • ROCm Is AMD's #1 Priority, Executive Says
    5 projects | news.ycombinator.com | 26 Sep 2023
    It's not that they're supporting buggy code; they just downgraded the quality of their implementation significantly. They made the compiler a lot worse when they swapped to ROCm.

    https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/iss... is the tracking issue for it, filed a year ago, which appears to be wontfix largely because it's a lot of work.

    OpenCL still unfortunately supports quite a few things that Vulkan doesn't, which makes swapping away very difficult for some use cases.

  • rocm-opencl (rocm-opencl-runtime) rx 6600 xt support
    3 projects | /r/Fedora | 3 Jun 2023
    There's https://docs.amd.com/bundle/ROCm-Installation_FAQ/page/Frequently_Asked_Questions.html, which leads to a page that doesn't list any GPUs that I can see; there's https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html, which lists "RDNA2"; and there's https://github.com/RadeonOpenCompute/ROCm/issues/1698, which is from last year and mentions changing an environment variable for the RX 6600 XT (Navi 23). Not a lot is mentioned in the readme of https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime.
  • Install ROCm Fedora 38
    2 projects | /r/Fedora | 3 May 2023
    $ dnf info rocm-opencl
    Installed Packages
    Name         : rocm-opencl
    Version      : 5.4.3
    Release      : 2.fc38
    Architecture : x86_64
    Size         : 1.7 M
    Source       : rocm-opencl-5.4.3-2.fc38.src.rpm
    Repository   : @System
    From repo    : updates
    Summary      : ROCm OpenCL Runtime
    URL          : https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime
    License      : MIT
    Description  : ROCm OpenCL language runtime.
                 : Supports offline and in-process/in-memory compilation.
    (A small sketch for checking that this runtime is visible to applications follows this list.)
  • First time in 2 years I was able to get Blender running with an AMD GPU on Linux!
    3 projects | /r/Amd | 3 Jun 2022
    E.g. this bug about shared CL/GL textures with mipmaps being broken has now passed its first birthday without even an acknowledgement - basic CL/GL functionality here. This bug took a year for a fix to make its way into a public driver. And this fairly performance-critical bug is just "wontfix", and is also a significant downgrade from their old driver stack.
  • So far I'm unconvinced a 34MB binary blob is more free than OpenZFS.
    3 projects | /r/linux | 12 May 2022
    It's definitely workable if you're willing to put in the effort (except for things that are straight up broken in some cases, like device-side enqueue), but there are some issues that require fairly major workarounds. (A sketch of what device-side enqueue looks like appears after this list.)
  • New NVIDIA Open-Source Linux Kernel Graphics Driver Appears
    2 projects | /r/linux_gaming | 8 Apr 2022
    Their implementation is here: https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime
  • C++ Show and Tell - April 2022
    29 projects | /r/cpp | 3 Apr 2022
    After a lot of moderately annoyed testing, I discovered that the AMD OpenCL implementation is rather dumb. If any two kernels share any arguments, it inserts a command barrier between the two, hard-stalling the GPU. After filing a bug, it turns out this is wontfix as well, which is doubly bad. There's no set of flags in OpenCL that you can use to fix this either. (A sketch of the shared-argument pattern appears after this list.)
  • [TPU] AMD ROCm 4.5 Drops "Polaris" Architecture Support
    3 projects | /r/Amd | 11 Nov 2021
    What's particularly bizarre is that, with one bug report I filed, they claim to have fixed it internally in April, but no public driver has ever been released with the fix. For 7 months? Which is just a bizarre software development process.
  • Who is to blame for the bad OpenCL Performance? Blender or AMD?
    3 projects | /r/Amd | 19 Apr 2021
    Could you report these issues here: https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime? Regarding the device-side enqueue issue, could you attach a simple test case to the issue that reproduces the crash? The current pastebin link doesn't give enough info.
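
The Fedora post above installs rocm-opencl but doesn't show whether applications can actually see the runtime. Below is a minimal, hedged sketch (not taken from any of the posts) that lists OpenCL platforms and GPU devices through the standard OpenCL C API; the file name, build command, and buffer sizes are assumptions, and error handling is mostly omitted.

    // Minimal sketch: confirm an installed rocm-opencl runtime is visible.
    // Assumed build command: g++ list_opencl.cpp -lOpenCL
    #define CL_TARGET_OPENCL_VERSION 220
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        cl_uint nplat = 0;
        clGetPlatformIDs(0, nullptr, &nplat);               // count platforms
        std::vector<cl_platform_id> plats(nplat);
        clGetPlatformIDs(nplat, plats.data(), nullptr);

        for (cl_platform_id p : plats) {
            char name[256] = {};
            clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
            // The ROCm runtime typically reports "AMD Accelerated Parallel Processing".
            std::printf("platform: %s\n", name);

            cl_uint ndev = 0;
            if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &ndev) != CL_SUCCESS)
                continue;                                    // no GPU devices here
            std::vector<cl_device_id> devs(ndev);
            clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, ndev, devs.data(), nullptr);
            for (cl_device_id d : devs) {
                char dname[256] = {};
                clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
                std::printf("  gpu: %s\n", dname);
            }
        }
        return 0;
    }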
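Several posts above mention "device-side enqueue" being broken in the ROCm runtime. As a rough illustration of what that OpenCL 2.0 feature is (not a reproduction of any specific bug), the sketch below holds a kernel as a C++ raw string in which a parent kernel launches a child kernel directly from the GPU; the kernel names and work size are invented for illustration.

    // Hedged illustration of device-side enqueue (OpenCL C 2.0 source embedded
    // in a C++ raw string literal). Names are illustrative only.
    static const char *kDeviceEnqueueSource = R"CLC(
    kernel void child(global int *data) {
        data[get_global_id(0)] += 1;
    }

    kernel void parent(global int *data) {
        if (get_global_id(0) == 0) {
            // The parent kernel schedules `child` from the GPU itself,
            // without a round trip through the host.
            enqueue_kernel(get_default_queue(),
                           CLK_ENQUEUE_FLAGS_WAIT_KERNEL,
                           ndrange_1D(64),
                           ^{ child(data); });
        }
    }
    )CLC";
    // Building this needs an OpenCL 2.x runtime: pass "-cl-std=CL2.0" to
    // clBuildProgram and create a default on-device queue with the
    // CL_QUEUE_ON_DEVICE | CL_QUEUE_ON_DEVICE_DEFAULT properties.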
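The C++ Show and Tell post describes the runtime inserting a command barrier between any two kernels that merely share an argument. The following hedged sketch shows the kind of host-side pattern being described: two kernels reading one shared buffer but writing independent outputs. All names are illustrative, and the context, queue, kernels, and buffers are assumed to have been created in the usual way.

    // Hedged sketch of the shared-argument pattern from the post above.
    #define CL_TARGET_OPENCL_VERSION 220
    #include <CL/cl.h>

    void enqueue_shared_arg_pair(cl_command_queue queue,
                                 cl_kernel kernel_a, cl_kernel kernel_b,
                                 cl_mem shared_in, cl_mem out_a, cl_mem out_b)
    {
        size_t global = 1024;

        // Both kernels only read `shared_in` and write to independent outputs,
        // so in principle they could overlap on the GPU.
        clSetKernelArg(kernel_a, 0, sizeof(cl_mem), &shared_in);
        clSetKernelArg(kernel_a, 1, sizeof(cl_mem), &out_a);
        clSetKernelArg(kernel_b, 0, sizeof(cl_mem), &shared_in);
        clSetKernelArg(kernel_b, 1, sizeof(cl_mem), &out_b);

        clEnqueueNDRangeKernel(queue, kernel_a, 1, nullptr, &global, nullptr,
                               0, nullptr, nullptr);
        // Per the post, the shared `shared_in` argument alone is enough for the
        // runtime to insert a hard barrier before this second launch.
        clEnqueueNDRangeKernel(queue, kernel_b, 1, nullptr, &global, nullptr,
                               0, nullptr, nullptr);
        clFlush(queue);
    }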

What are some alternatives?

When comparing ROCm and ROCm-OpenCL-Runtime you can also consider the following projects:

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

rocm-arch - A collection of Arch Linux PKGBUILDs for the ROCm platform

oneAPI.jl - Julia support for the oneAPI programming toolkit.

SHARK - SHARK - High Performance Machine Learning Distribution

plaidml - PlaidML is a framework for making deep learning work everywhere.

llama.cpp - LLM inference in C/C++

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

tensorflow-upstream - TensorFlow ROCm port

AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!

kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.

server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.