It's really shocking that AMD fails to extend support natively.
Workarounds such as DirectML claim to be the answer for unifying people with NVIDIA or AMD GPUs, but so far they haven't delivered, with issues such as [this](https://github.com/microsoft/DirectML/issues/58) constantly popping up.
As nicolaslem points out, Arch does have community packages for ROCm, but those, unsurprisingly, fail to support many consumer GPUs. The best community support I have come across is [rocm-opencl](https://copr.fedorainfracloud.org/coprs/mystro256/rocm-openc... and [rocm-hip](https://copr.fedorainfracloud.org/coprs/mystro256/rocm-hip/) for Fedora, maintained by [mystro256](https://github.com/Mystro256), a single AMD employee. Thanks to him, my AMD GPU (Radeon 6800 XT) hasn't completely gone to waste, and I was able to tinker with some things (gaming isn't really up my alley).
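For anyone on Fedora wanting to try the same route, enabling the Copr repository looks roughly like this. This is a sketch based on the repo name in the link above; the exact package names installed from it may vary by Fedora release:

```shell
# Enable the community Copr repo linked above (repo name taken from the URL)
sudo dnf copr enable mystro256/rocm-hip

# Install the HIP runtime from it (package name is an assumption; check
# `dnf repoquery --repo=copr:...` for what the repo actually provides)
sudo dnf install rocm-hip

# Verify the ROCm runtime can see the GPU
rocminfo | grep -i 'Marketing Name'
```
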
Lately, however, after starting to work on DGX V100s and A100s and using my older laptop with a GTX 1650, it became apparent how simple setting up CUDA was, and how easily I could experiment with it on my consumer card. Many have told similar stories, and this is mine. I really hope AMD does a whole lot more, and doesn't reserve its powerful GPUs exclusively for gaming.
I know of this framework, https://github.com/webonnx/wonnx, but I've never used it.