gpu-kernel-runner vs occa

| | gpu-kernel-runner | occa |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 18 | 379 |
| Stars growth (monthly) | - | 0.0% |
| Activity | 6.7 | 7.8 |
| Last commit | 11 days ago | 24 days ago |
| Language | C++ | C++ |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpu-kernel-runner
How Jensen Huang's Nvidia Is Powering the A.I. Revolution
> but all the alternatives require significant redesign in languages and tools people are unfamiliar with and we can't afford that overhead
Where I work, we've made it a principle to stay OpenCL-compatible even while going with NVIDIA for their better-performing GPUs. I even go so far as to write kernels that can be compiled as either CUDA C++ or OpenCL C, with a few duct-tape adapter headers:
https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...
https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...
Of course, if you're working with higher-level frameworks it's more difficult, since you depend on whether they provide different backends. So no Thrust for AMD GPUs, for example, but PyTorch and TensorFlow do let you use them.
occa
What are some alternatives?
BabelStream - STREAM, for lots of devices written in many programming models
gtensor - GTensor is a multi-dimensional array C++14 header-only library for hybrid GPU development.
ArrayFire - ArrayFire: a general purpose GPU library.
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!