| | gpu-kernel-runner | ParallelReductionsBenchmark |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 18 | 60 |
| Growth | - | - |
| Activity | 6.7 | 4.7 |
| Last Commit | 12 days ago | 6 days ago |
| Language | C++ | C++ |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
gpu-kernel-runner
How Jensen Huang's Nvidia Is Powering the A.I. Revolution
> but all the alternatives require significant redesign in languages and tools people are unfamiliar with and we can't afford that overhead
Where I work, we've made it a principle to stay OpenCL-compatible even while going with NVIDIA due to their better-performing GPUs. I even go as far as writing kernels that can be compiled as either CUDA C++ or OpenCL-C, with a bit of duct-tape adapter headers:
https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...
https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...
Of course, if you're working with higher-level frameworks it's more difficult, and you depend on whether they provide different backends. So no Thrust for AMD GPUs, for example, but PyTorch and TensorFlow do let you use them.
ParallelReductionsBenchmark
Failing to Reach 204 GB/s DDR4 Bandwidth
For the single threaded version, they have a data hazard on the sums that could be smoothed out with a little loop unrolling and separate variables.
But in the [threaded version](https://github.com/unum-cloud/ParallelReductions/blob/fd16d9...) they have separate slots for an accumulator but it's still in a shared vector, which most likely has the issue I described.
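The single-threaded fix suggested above can be sketched like this (a minimal illustration, not code from the repo; the function name is hypothetical). Unrolling by four with separate accumulators gives the CPU four independent dependency chains, so the floating-point adds can overlap instead of serializing on one sum:

```cpp
#include <cstddef>
#include <vector>

// Sketch: break the serial dependence on a single accumulator by unrolling
// and summing into four independent variables.
float sum_unrolled(const std::vector<float>& v)
{
    float s0 = 0.f, s1 = 0.f, s2 = 0.f, s3 = 0.f;
    std::size_t i = 0;
    for (; i + 4 <= v.size(); i += 4) {
        // Four independent chains; adds to s0..s3 can execute in parallel
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    float s = (s0 + s1) + (s2 + s3);
    for (; i < v.size(); ++i)  // leftover tail
        s += v[i];
    return s;
}
```

Note this changes the order of the floating-point additions, so the result may differ from a strictly sequential sum by rounding; for a bandwidth benchmark that is usually acceptable.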
What are some alternatives?
BabelStream - STREAM, for lots of devices written in many programming models
MatX - An efficient C++17 GPU numerical computing library with Python-like syntax
ArrayFire - ArrayFire: a general purpose GPU library.
ispc - Intel® Implicit SPMD Program Compiler
gpuowl - GPU Mersenne primality test.
alpaka - Abstraction Library for Parallel Kernel Acceleration :llama:
cuda_memtest - Fork of CUDA GPU memtest :eyeglasses:
eaminer - Heterogeneous Ethereum Miner with support for AMD, Intel and Nvidia GPUs using SYCL, OpenCL and CUDA backends
relion - Image-processing software for cryo-electron microscopy
amgcl - C++ library for solving large sparse linear systems with algebraic multigrid method
laser - The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers
gpu_clock_stabilizer - Simple GPU clock stabilizer for consistent profiling