HIP vs HIP-CPU

| | HIP | HIP-CPU |
|---|---|---|
| Mentions | 29 | 5 |
| Stars | 3,462 | 105 |
| Growth | 1.5% | 3.8% |
| Activity | 8.9 | 7.2 |
| Latest commit | 1 day ago | about 2 months ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HIP
- HIP: Runtime API and Kernel Language for Portable Apps for AMD and Nvidia GPUs
- Open-source project ZLUDA lets CUDA apps run on AMD GPUs
Is it perhaps because they want people to use HIP?
> HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> The HIPIFY tools automatically convert source from CUDA to HIP.
1. https://github.com/ROCm/HIP
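For a concrete sense of how close the two APIs are, here is a minimal vector-add sketch in HIP (illustrative only; kernel and variable names are made up, and it assumes a build with hipcc). Everything except the hip* prefixes and the header would look the same in CUDA:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Kernels use the same __global__ qualifier and thread-index
// builtins as CUDA; only the host API prefix changes (cuda* -> hip*).
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, bytes);   // cudaMalloc in CUDA
    hipMalloc((void**)&db, bytes);
    hipMalloc((void**)&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    // hipcc accepts CUDA's familiar triple-chevron launch syntax.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", hc[0]);   // expect 3.000000

    hipFree(da); hipFree(db); hipFree(dc);
}
```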
- AMD's Next GPU Is a 3D-Integrated Superchip
AMD has released HIP and a tool called HIPIFY, which behaves somewhat like this but at the source level¹. Rather than trying to translate CUDA directly to run on AMD compute, they are more focused on higher-level tooling.
Currently they seem to have a particular focus on AI frameworks and tools like PyTorch/TensorFlow/ONNX. They have sponsored and helped with a lot of PyTorch development, for example, so PyTorch support for AMD is much better than it was this time last year².
¹(https://github.com/ROCm/HIP)
²(https://pytorch.org/blog/experience-power-pytorch-2.0/)
- Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
> what would be the point for someone to add ROCm support to various pieces of software which currently require CUDA
It isn't just old cards, though. CUDA is a point of centralization on a single provider, at a time when access to that provider's higher-end cards isn't even available, and that is pushing people to look elsewhere.
ROCm supports CUDA through the included HIP projects...
https://github.com/ROCm/HIP
https://github.com/ROCm/HIPCC
https://github.com/ROCm/HIPIFY
The latter will regex-replace your CUDA calls with their HIP equivalents. If it is as easy as running hipify on your codebase (or just coding to the HIP APIs), it certainly makes sense to do so.
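For a rough sense of the rewrite the HIPIFY tools perform (the buffer and kernel names below are hypothetical):

```cpp
// What hipify rewrites in a typical file -- essentially a prefix
// rename; device code and launch syntax are left alone.
cudaMalloc((void**)&d_buf, bytes);      // becomes: hipMalloc((void**)&d_buf, bytes);
cudaMemcpy(d_buf, h_buf, bytes,
           cudaMemcpyHostToDevice);     // becomes: hipMemcpy(..., hipMemcpyHostToDevice);
my_kernel<<<grid, block>>>(d_buf, n);   // unchanged: HIP keeps the <<<...>>> launch syntax
cudaDeviceSynchronize();                // becomes: hipDeviceSynchronize();
cudaFree(d_buf);                        // becomes: hipFree(d_buf);
```

In practice that is a run of something like `hipify-perl kernel.cu > kernel.hip.cpp` per source file; hipify-clang does the same translation with a real parser rather than regexes.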
- Nvidia on the Mountaintop
AMD's equivalent is HIP [1], for sufficiently flexible definitions of "equivalent". I can't speak to how complete/correct/performant it is (I'm just a guy running tutorial/toy-level ML stuff on an RDNA1 card), but part of AMD's problem is that it might not practically matter how well they do this because the broader ecosystem support specifically for the CUDA stack is so entrenched.
[1] https://github.com/ROCm-Developer-Tools/HIP
- Stable Diffusion in pure C/C++
- Would love to hear your information and knowledge to simplify my understanding on AMD's positioning in the AI market
- Ask HN: C++ still dominates on GPUs, why not Rust?
From what I know, modern GPUs are still programmed exclusively in C++. See CUDA [0] for Nvidia and ROCm [1] for AMD.
Why is this? Why is Rust not loved there?
[0] https://docs.nvidia.com/cuda/
[1] https://github.com/ROCm-Developer-Tools/HIP
- [P] RWKV C++ Cuda library with no dependencies, no torch, and no python
Go ahead and try to ship ROCm code that works on multiple consumer graphics cards on Linux, macOS, and Windows. As an example of how much AMD cares about it, the installation notes linked in the readme return a 404.
- Someone found a ROCm 5.5 RC Docker Container that works on 7000 series GPUs
The big whoop for ROCm is that AMD invested a considerable amount of engineering time and talent into a tool they call HIP. Basically, it's an analysis tool that does its best to port proprietary Nvidia CUDA-style code - which, for various smelly reasons, rules the roost - to code that can happily run on AMD graphics cards, and presumably others. Intel has a similar thing going with oneAPI. They've done this while also porting a lot of their code base to the open-source Linux AMDGPU kernel driver.
HIP-CPU
- HIP CPU
- [P] Pure C/C++ port of OpenAI's Whisper
- AMD publishes GPUFORT as Open Source to address CUDA’s dominance
If I'm reading this right, this is Fortran's equivalent of HIP, i.e. a way to (semi-)automatically convert a CUDA-based solution into a more backend-independent one, so that the same source can run on both CUDA and ROCm GPUs (and potentially more; e.g. they also have an experimental CPU backend).
- Test Coverage with CUDA
So, I know that you asked about CUDA, but this might actually be possible in HIP, and you can convert your code to HIP relatively easily. The path would be to use the CPU implementation (https://github.com/ROCm-Developer-Tools/HIP-CPU) and then run your code coverage on that.
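A sketch of how that could look, assuming HIP-CPU's header-only runtime is on the include path (file and kernel names here are hypothetical):

```cpp
// square.cpp -- ordinary HIP source. With HIP-CPU on the include
// path, <hip/hip_runtime.h> resolves to a CPU implementation, so a
// plain host compiler can build it -- no GPU or hipcc required.
#include <hip/hip_runtime.h>
#include <cassert>
#include <vector>

__global__ void square(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= data[i];
}

int main() {
    const int n = 256;
    std::vector<float> host(n, 3.0f);
    const size_t bytes = n * sizeof(float);

    float* dev = nullptr;
    hipMalloc((void**)&dev, bytes);
    hipMemcpy(dev, host.data(), bytes, hipMemcpyHostToDevice);

    // hipLaunchKernelGGL(kernel, grid, block, sharedMemBytes, stream, args...)
    // HIP-CPU has no compiler support for <<<...>>>, so the macro form is used.
    hipLaunchKernelGGL(square, dim3(1), dim3(n), 0, nullptr, dev, n);

    hipMemcpy(host.data(), dev, bytes, hipMemcpyDeviceToHost);
    assert(host[0] == 9.0f);
    hipFree(dev);
}
```

Because this builds with an ordinary host compiler, the usual coverage tooling applies: something like g++ -std=c++17 --coverage -I<hip-cpu>/include square.cpp -ltbb (HIP-CPU requires C++17 and uses TBB for parallelism), after which the kernel body shows up in gcov reports like any other C++ function.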
What are some alternatives?
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
ZLUDA - CUDA on AMD GPUs
libcudacxx - [ARCHIVED] The C++ Standard Library for your entire system. See https://github.com/NVIDIA/cccl
futhark - :boom::computer::boom: A data-parallel functional programming language
rocFFT - Next generation FFT implementation for ROCm
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
stdgpu - Efficient STL-like Data Structures on the GPU
ginkgo - Numerical linear algebra software package
XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.