Why is AMD leaving ML to nVidia?

This page summarizes the projects mentioned and recommended in the original post on /r/Amd

  • ROCm

    AMD ROCm™ Software - GitHub Home

    However, ROCm support is Linux-only, and will never be ported to Windows. For that, your best hope would be PyTorch for DirectML... Also, there's no RDNA3 support until ROCm 5.5 (finally?). RDNA is always treated as the red-headed stepchild, which basically cedes the entire non-datacenter market: no hobbyists, academics, startups, or anyone else scrappy starting small and moving up. Without any enthusiasts playing around with AMD cards, it's a vicious cycle of no one using it, since there's no community...

  • SHARK

    SHARK - High Performance Machine Learning Distribution

    One other thing to look into is SHARK. Apparently running LLaMA on SHARK is possible; for example: https://github.com/nod-ai/SHARK/tree/main/shark/examples/shark_inference/llama

  • HIP

    HIP: C++ Heterogeneous-Compute Interface for Portability

    Still, https://github.com/ROCm-Developer-Tools/HIP and https://github.com/RadeonOpenCompute/ROCm are the paths through which they'll eventually get there.

  • AMDGPU.jl

    AMD GPU (ROCm) programming in Julia

    For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything has worked nicely. To install, add the rocm-opencl package with dnf and the AMDGPU.jl Julia package, then add yourself to the video group and you're good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.

  • KernelAbstractions.jl

    Heterogeneous programming in Julia

    Julia's KernelAbstractions.jl is good to have when writing portable code; see the AMDGPU.jl entry above for the full ROCm setup on Fedora.

  • stable-diffusion-webui

    Stable Diffusion web UI

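The Fedora setup described in the AMDGPU.jl entry above can be sketched as a few shell commands. This is a sketch under that comment's assumptions: the package name (rocm-opencl), group name (video), and Julia package names come from the post and may differ by distro or ROCm version.

```shell
# Sketch of the Julia + ROCm setup from the AMDGPU.jl comment above.
# Assumes Fedora with dnf; names are taken from that comment,
# not independently verified.
sudo dnf install rocm-opencl            # ROCm OpenCL runtime
sudo usermod -aG video "$USER"          # GPU device access (log out/in afterwards)
julia -e 'using Pkg; Pkg.add("AMDGPU"); Pkg.add("KernelAbstractions")'
```

After logging back in, `using AMDGPU` in a Julia session should pick up the GPU; KernelAbstractions.jl then lets the same kernel code target ROCm, CUDA, or CPU backends.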
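For the Windows path mentioned in the ROCm and stable-diffusion-webui comments above, a minimal PyTorch-for-DirectML sketch might look like the following. The torch-directml package is Microsoft's; treat the exact module and function names here as assumptions to verify against its current documentation.

```shell
# Hedged sketch: trying PyTorch via DirectML on Windows, per the
# comments above. Requires Windows with a DirectX 12 capable GPU;
# names assumed from Microsoft's torch-directml project.
pip install torch-directml
python -c "import torch_directml; print(torch_directml.device())"
```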
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives; a higher number means a more popular project.
