Short answer: no. Long answer: "in theory," yes. I tried this [1] but gave up, as building ROCm plus its dependencies takes up to 6 hours :/ Official statement: [2]
[1] https://github.com/xuhuisheng/rocm-build
For anyone on Arch, there is a third-party repository called arch4edu[0] that provides up-to-date builds of ROCm and its dependencies. On my iGPU, OpenCL sometimes works, sometimes crashes. Even finding a list of supported hardware is close to impossible. The whole situation is just ridiculous and makes AMD look bad.
[0] https://github.com/arch4edu/arch4edu
> Thus, the idea is that through typically negligible effort porting to HiP, your code becomes vendor-independent.
Here, the big AMD mistake was to rename those function prefixes in the first place. It's a mistake that they could have avoided...
What a lot of software codebases did to support AMD (notably PyTorch): the codebase stays pure CUDA, and the conversion pass to HIP is run at build time.
See https://github.com/ROCm-Developer-Tools/HIPIFY/blob/amd-stag... for the Perl script to do it.
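Since the HIP API largely mirrors CUDA's with renamed prefixes, such a build-time pass is mostly a big find-and-replace over identifiers. Here is a minimal, hypothetical sketch of the idea in Python; the real tool is a Perl script with a far larger mapping table, and these few entries are illustrative only:

```python
# Hypothetical sketch of a hipify-style build-time conversion pass.
# The real hipify-perl script covers hundreds of API names; this tiny
# table only illustrates the prefix-rename pattern.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaError_t": "hipError_t",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    """Rewrite CUDA identifiers in a source string to HIP equivalents."""
    # Replace longer names first so a longer identifier is never
    # half-rewritten by a shorter entry that is its prefix.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_src = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, 256); cudaFree(d);"
print(hipify(cuda_src))
```

Because the mapping is almost one-to-one, projects like PyTorch can keep a single CUDA codebase and emit the HIP variant only when targeting AMD.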
Then comes the problem of AMD not supporting ROCm HIP on most of their hardware or user base.
On Windows, the ROCm HIP SDK is private and only available under NDA. This means that while you can use Blender w/ HIP on Windows, the Blender builds that you compile yourself will not be able to use ROCm HIP.
On Linux, the supported GPUs are few and far between: only Vega20 onwards are supported today. APUs, RDNA1, and lower-end RDNA2 cards (6700 XT and below) are excluded unless you resort to unsupported hacks.
Here is a list of potential issues: https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
That said, we (the Nod.ai team) will add support for xformers soon, so you can opt in to xformers anyway.