ROCm VS tensorflow-upstream

Compare ROCm vs tensorflow-upstream and see what their differences are.

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)
                 ROCm            tensorflow-upstream
Mentions         198             12
Stars            3,637           674
Stars growth     -               1.0%
Activity         0.0             0.0
Last commit      4 months ago    6 days ago
Language         Python          C++
License          MIT License     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.

tensorflow-upstream

Posts with mentions or reviews of tensorflow-upstream. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-06.
  • Disable "SetTensor/CopyTensor" console logging.
    2 projects | /r/ROCm | 6 Jul 2023
    I tried to train another model using InceptionResNetV2 and the same issue happens. Also, this happens even when using the model.predict() method on the GPU. Probably this is an issue related to the AMD Radeon RX 6700 XT or some misconfiguration on my end. System information: Arch Linux 6.1.32-1-lts - AMD Radeon RX 6700 XT - gfx1031. Opened issues: - https://github.com/RadeonOpenCompute/ROCm/issues/2250 - https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125
  • New NVIDIA Open-Source Linux Kernel Graphics Driver Appears
    2 projects | /r/linux_gaming | 8 Apr 2022
    I mean, tensorflow has a fork with ROCm support which is maintained by AMD https://github.com/ROCmSoftwarePlatform/tensorflow-upstream although I'm not entirely sure what your AI workloads are specifically, I'm just throwing out tensorflow because it's popular. On the enterprise side they also have Radeon Instinct MI, although I assume you're probably not using enterprise HW but I wanted to throw it out there anyway.
  • AMD on the Brink of Taking Over the GPU Market for Linux Gamers (Q2 2021 Survey Results)
    2 projects | /r/linux_gaming | 6 Jun 2021
    The repo exists: https://github.com/ROCmSoftwarePlatform/tensorflow-upstream
  • Tensorflow with Radeon GPU
    2 projects | /r/archlinux | 5 May 2021
    https://github.com/ROCmSoftwarePlatform/tensorflow-upstream#tensorflow-rocm-port
  • Which version of ROCm and Tensorflow should I use?
    3 projects | /r/ROCm | 22 Mar 2021
    I tried some combinations of ROCm & tensorflow-rocm as listed on this page ( tensorflow-upstream/tensorflow-rocm-release.md at develop-upstream · ROCmSoftwarePlatform/tensorflow-upstream · GitHub ), but I failed to run a simple CNN model with the fashion-mnist dataset.
  • Long-term value play for AMD
    4 projects | /r/wallstreetbets | 6 Mar 2021
    However, the binary PyPI packages are still distributed by AMD (and as of writing a point release behind the current upstream version) so it feels a bit like a second-class citizen right now.
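Several of the posts above involve the Radeon RX 6700 XT (gfx1031), an architecture that ROCm does not officially support, and noisy TensorFlow console logging. A minimal sketch of the common community workarounds, assuming a pip-installed tensorflow-rocm (the environment variables are real, but AMD does not officially support the gfx override):

```shell
# Community workaround (not officially supported by AMD): report the
# RX 6700 XT's gfx1031 as gfx1030, which ROCm does support.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Quiet TensorFlow's C++ logging:
# 0 = all messages, 1 = hide INFO, 2 = hide INFO+WARNING, 3 = hide INFO+WARNING+ERROR.
export TF_CPP_MIN_LOG_LEVEL=2

# Then launch training in the same shell, e.g.:
#   python train.py   (hypothetical training script)
```

Both variables must be set in the environment of the process that launches TensorFlow; setting them after the runtime has initialized has no effect.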

What are some alternatives?

When comparing ROCm and tensorflow-upstream you can also consider the following projects:

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform

oneAPI.jl - Julia support for the oneAPI programming toolkit.

SHARK - SHARK - High Performance Machine Learning Distribution

plaidml - PlaidML is a framework for making deep learning work everywhere.

llama.cpp - LLM inference in C/C++

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

ROCm-OpenCL-Runtime - ROCm OpenCL Runtime

AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!

kompute - General purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.

server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.