Why is AMD leaving ML to nVidia?

This page summarizes the projects mentioned and recommended in the original post on /r/Amd

  • ROCm

    Discontinued AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

  • However, ROCm support is Linux-only, and will never be ported to Windows. For Windows, your best hope would be PyTorch for DirectML... Also, there's no RDNA3 support until ROCm 5.5 (finally?). RDNA is always treated as the red-headed stepchild, so that basically cedes the entire non-datacenter market: no hobbyists, academics, startups, or anyone else scrappy starting small and moving up. Without any enthusiasts playing around w/ AMD cards, it's a vicious cycle of no one using it, since there's no community...
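    The Windows fallback the commenter mentions can be tried with Microsoft's DirectML backend for PyTorch; a minimal setup sketch, assuming a Windows machine with Python installed (the pip package and module names are real, but operator coverage and version support vary):

    ```shell
    # Install the DirectML backend for PyTorch (Windows, any DX12-capable GPU,
    # including AMD RDNA cards that ROCm does not cover).
    pip install torch-directml

    # Quick smoke test: allocate a tensor on the DirectML device.
    python -c "import torch; import torch_directml; d = torch_directml.device(); print(torch.ones(2, 2, device=d) + 1)"
    ```

    This sidesteps ROCm entirely, at the cost of a smaller supported operator set than native CUDA or ROCm builds of PyTorch.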

  • SHARK

    SHARK - High Performance Machine Learning Distribution

  • One other thing to look into is SHARK. Apparently SHARK LLaMA is possible, for example: https://github.com/nod-ai/SHARK/tree/main/shark/examples/shark_inference/llama

  • HIP

    HIP: C++ Heterogeneous-Compute Interface for Portability

  • Still: https://github.com/ROCm-Developer-Tools/HIP and https://github.com/RadeonOpenCompute/ROCm are the ways through which eventually they'll get there.
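    For context, HIP's portability pitch is that existing CUDA sources can be mechanically translated and rebuilt; a rough sketch of that workflow, assuming a working ROCm install and a hypothetical `vector_add.cu` source file:

    ```shell
    # Translate CUDA API calls (cudaMalloc, <<<...>>> launches, etc.) to HIP.
    hipify-perl vector_add.cu > vector_add.hip.cpp

    # Compile with hipcc; on AMD hardware this targets ROCm, on NVIDIA it
    # falls back to the CUDA toolchain -- the same source runs on both.
    hipcc vector_add.hip.cpp -o vector_add
    ./vector_add
    ```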

  • AMDGPU.jl

    AMD GPU (ROCm) programming in Julia

  • For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything worked nicely. To set up, you install the rocm-opencl package with dnf, add the AMD Julia package (AMDGPU.jl), add yourself to the video group, and you're good to go. Also, Julia's KernelAbstractions.jl is good to have when writing portable code.
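    The setup the commenter describes condenses to a few commands; a hedged sketch for Fedora (the package and group names are taken from the comment itself; a re-login is needed for the group change to take effect):

    ```shell
    # ROCm OpenCL runtime from the Fedora repos
    sudo dnf install rocm-opencl

    # Let your user access the GPU device nodes
    sudo usermod -aG video "$USER"

    # AMD GPU support plus the portable-kernel layer in Julia
    julia -e 'using Pkg; Pkg.add("AMDGPU"); Pkg.add("KernelAbstractions")'
    ```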

  • KernelAbstractions.jl

    Heterogeneous programming in Julia

  • stable-diffusion-webui

    Stable Diffusion web UI
