AMD ROCm™ Software - GitHub Home
OK, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...
Nope. Anything about this on the Arch wiki? Nope.
This bug report from 2021? Maybe I need to update my groups.
$ ls -la /dev/kfd
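The `ls` check above goes hand in hand with the group question: access to the KFD node is gated by group membership. A minimal sketch, assuming a typical amdgpu/KFD setup where `/dev/kfd` is owned by the `render` group (older stacks use `video`):

```shell
# Inspect the ROCm compute node; on a working setup it is usually
# crw-rw---- and owned by root:render (older stacks use group "video").
ls -l /dev/kfd 2>/dev/null || echo "/dev/kfd missing: amdgpu/KFD driver not loaded?"

# Check whether the current user is in the render/video groups
groups=$(id -nG)
case " $groups " in
  *" render "*|*" video "*) echo "group membership OK" ;;
  *) echo "try: sudo usermod -aG render,video \$USER (then log out and back in)" ;;
esac
```

Note the re-login: `usermod -aG` only affects new sessions.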
ROCm OpenCL Runtime
It's not that they're supporting buggy code; they just downgraded the quality of their implementation significantly. They made the compiler a lot worse when they swapped to ROCm.
https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/iss... is the tracking issue for it, filed a year ago, which appears to be wontfix largely because it's a lot of work.
OpenCL unfortunately still supports quite a few things that Vulkan doesn't, which makes swapping away very difficult for some use cases.
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which have a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The normal workaround is to assign the closest architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.
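As a sketch of that workaround, assuming a gfx1032 (Navi 23) card as above, the override is just an environment variable set before launching the workload:

```shell
# gfx1032 (Navi 23, e.g. RX 6600/6600 XT) is absent from the official support
# list; spoofing the closely related gfx1030 ISA usually works in practice.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then launch the workload from the same shell, e.g. (hypothetical invocation):
#   ./main -m model.gguf ...
```

Pick the override to match the nearest officially supported ISA in your card's family; an unrelated ISA will crash rather than help.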
Also, it looks like some of your tested projects are OpenCL? I do something like `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
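After installing those SDKs, a quick sanity check that both the HIP and OpenCL sides actually see the GPU (a sketch, assuming `rocminfo` and `clinfo` are on your PATH; the commands degrade gracefully if not):

```shell
# List the gfx ISAs ROCm reports (prints "none" if no agent is detected)
agents=$( (rocminfo 2>/dev/null || true) | grep -Eo 'gfx[0-9a-f]+' | sort -u )
echo "ROCm gfx agents: ${agents:-none}"

# Same idea for the OpenCL side
(clinfo 2>/dev/null || true) | grep -i 'device name' || echo "no OpenCL devices reported"
```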
My recent interest has been LLMs, and this is my general step-by-step guide for those (llama.cpp, exllama), for anyone interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
Ebuilds to install ROCm on Gentoo Linux (by justxi)
Support for Gentoo existed for a long time in https://github.com/justxi/rocm before being merged in the main Portage tree.
AI on an Android phone?
2 projects | /r/LocalLLaMA | 8 Dec 2023
MLC vs llama.cpp
2 projects | /r/LocalLLaMA | 7 Nov 2023
[Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget
1 project | /r/LocalLLaMA | 21 Oct 2023
Scaling LLama2-70B with Multi Nvidia/AMD GPU
2 projects | news.ycombinator.com | 19 Oct 2023
Ask HN: Are you training and running custom LLMs and how are you doing it?
1 project | news.ycombinator.com | 14 Aug 2023