ROCm Is AMD's #1 Priority, Executive Says

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • ROCm

    Discontinued AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

  • Ok, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...

    Nope. Anything about this on the Arch wiki? Nope.

    This bug report[2] from 2021? Maybe I need to update my groups.

    [2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411

        $ ls -la /dev/kfd
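    The permissions check quoted above can be extended into a quick diagnostic. This is a sketch assuming a typical ROCm setup, where the user needs membership in the `render` group (and on some distros `video`) to open `/dev/kfd`:

```shell
# Check that the kernel fusion driver device exists, and note
# which group owns it (usually 'render')
ls -la /dev/kfd

# List the current user's groups; if 'render' (or 'video') is
# missing, ROCm tools will fail to open the device
groups

# Add the user to the relevant groups, then log out and back in
sudo usermod -aG render,video "$USER"
```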

  • ROCm-OpenCL-Runtime

    Discontinued ROCm OpenCL Runtime

  • It's not that they're supporting buggy code; they just downgraded the quality of their implementation significantly. They made the compiler a lot worse when they swapped to ROCm.

    https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/iss... is the tracking issue for it, filed a year ago, which appears to be a wontfix, largely because it's a lot of work.

    OpenCL unfortunately still supports quite a few things that Vulkan doesn't, which makes swapping away very difficult for some use cases.

  • mlc-llm

    Universal LLM Deployment Engine with ML Compilation

  • One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which have a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...

    The normal workaround is to assign the closest supported architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.

    Also, it looks like some of your tested projects are OpenCL? For me, I do something like: `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.

    My recent interest has been LLMs, and here is my general step-by-step guide (llama.cpp, exllama) for those interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus

    I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787

    From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
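    The override mentioned above is applied via an environment variable. A minimal sketch, assuming a gfx1032 card being treated as gfx1030; the `./main -m model.gguf` invocation is only an illustrative example of a ROCm-backed binary, not a required command:

```shell
# Check which GPU architecture the runtime actually reports
rocminfo | grep gfx

# Spoof the architecture for a single run so ROCm loads the
# gfx1030 kernels on an officially unsupported gfx1032 part
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./main -m model.gguf

# Or export it for the whole shell session
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```

    Note the version maps to the architecture name: gfx1030 becomes 10.3.0.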

  • rocm

    Ebuilds to install ROCm on Gentoo Linux (by justxi)

  • Support for Gentoo existed for a long time in https://github.com/justxi/rocm before being merged in the main Portage tree.

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • Ai on a android phone?

    2 projects | /r/LocalLLaMA | 8 Dec 2023
  • MLC vs llama.cpp

    2 projects | /r/LocalLLaMA | 7 Nov 2023
  • [Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget

    1 project | /r/LocalLLaMA | 21 Oct 2023
  • Scaling LLama2-70B with Multi Nvidia/AMD GPU

    2 projects | news.ycombinator.com | 19 Oct 2023
  • Ask HN: Are you training and running custom LLMs and how are you doing it?

    1 project | news.ycombinator.com | 14 Aug 2023