| | cuda-api-wrappers | AdaptiveCpp |
|---|---|---|
| Mentions | 10 | 19 |
| Stars | 731 | 1,040 |
| Growth | - | 2.2% |
| Activity | 8.5 | 9.7 |
| Latest commit | 3 days ago | 8 days ago |
| Language | C++ | C++ |
| License | BSD 3-clause "New" or "Revised" License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cuda-api-wrappers
-
VUDA: A Vulkan Implementation of CUDA
1. This implements the clunky C-ish API; there are also the Modern-C++ API wrappers, with automatic error checking, RAII resource control, etc.; see: https://github.com/eyalroz/cuda-api-wrappers (due disclosure: I'm the author)
2. Implementing the _runtime_ API is not the right choice; it's important to implement the _driver_ API, since otherwise you can't isolate contexts, dynamically add newly JIT-compiled kernels via modules, etc. (a rough sketch of the driver-API flow follows this list).
3. This is less than 3000 lines of code. Wrapping all of the core CUDA APIs (driver, runtime, NVTX, JIT compilation of CUDA-C++ and of PTX) took me > 14,000 LoC.
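To make points 2 and 3 above concrete, here is a minimal sketch of launching one JIT-compiled kernel through the raw driver API. The PTX string, kernel name, sizes and the error-check macro are illustrative placeholders, not code from the project above; every call needs the same manual status check, and teardown is manual, which is exactly the boilerplate RAII-style wrappers absorb.

```cpp
// Minimal raw CUDA driver-API sketch: create a context, load a PTX module, launch a kernel.
// The PTX text and kernel name are placeholders (e.g. something produced by NVRTC elsewhere).
#include <cuda.h>
#include <cstdio>
#include <cstdlib>

#define CU_CHECK(call)                                              \
    do {                                                            \
        CUresult rc_ = (call);                                      \
        if (rc_ != CUDA_SUCCESS) {                                  \
            const char* msg_ = nullptr;                             \
            cuGetErrorString(rc_, &msg_);                           \
            std::fprintf(stderr, "%s failed: %s\n", #call, msg_);   \
            std::exit(EXIT_FAILURE);                                \
        }                                                           \
    } while (0)

int main() {
    const char* ptx = ""; // placeholder: PTX produced by a JIT compilation step
    CUdevice dev; CUcontext ctx; CUmodule mod; CUfunction fn;

    CU_CHECK(cuInit(0));
    CU_CHECK(cuDeviceGet(&dev, 0));
    CU_CHECK(cuCtxCreate(&ctx, 0, dev));                  // explicit, isolatable context
    CU_CHECK(cuModuleLoadData(&mod, ptx));                // dynamically add a freshly compiled module
    CU_CHECK(cuModuleGetFunction(&fn, mod, "my_kernel"));

    size_t n = 1024;
    CUdeviceptr d_buf;
    CU_CHECK(cuMemAlloc(&d_buf, n * sizeof(float)));

    void* args[] = { &d_buf, &n };
    CU_CHECK(cuLaunchKernel(fn, /*grid*/ 4, 1, 1, /*block*/ 256, 1, 1,
                            /*sharedMemBytes*/ 0, /*stream*/ nullptr, args, nullptr));
    CU_CHECK(cuCtxSynchronize());

    // Manual teardown on every path - the kind of bookkeeping RAII wrappers do automatically.
    CU_CHECK(cuMemFree(d_buf));
    CU_CHECK(cuModuleUnload(mod));
    CU_CHECK(cuCtxDestroy(ctx));
    return 0;
}
```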
-
WezTerm is a GPU-accelerated cross-platform terminal emulator
> since the underlying API's are still C/C++,
If the use of GPUs is via CUDA, there are my https://github.com/eyalroz/cuda-api-wrappers/ which are RAII/CADRe, and therefore less unsafe. And on the Rust side - don't you need a bunch of unsafe code in the library enabling GPU support?
-
GNU Octave
Given your criteria, you might want to consider (modern) C++.
* Fast - in many cases faster than Rust, although the difference is probably inconsequential relative to the Python-to-Rust improvement.
* _Really_ utilize CUDA, OpenCL, Vulkan etc. Specifically, Rust GPU is limited in its supported features, see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...
* Host-side use of CUDA is at least as nice as, and probably nicer than, what you'll get with Rust. That is, provided you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.
* ... which brings me to another point: Richer offering of libraries for various needs than Rust, for you to possibly utilize.
* Easier to share than Rust. A target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.
There are downsides, of course, but I was just applying your criteria.
-
How CUDA Programming Works
https://github.com/eyalroz/cuda-api-wrappers
I try to address these and some other issues.
We should also remember that NVIDIA artificially prevents its profiling tools from supporting OpenCL kernels - with no good reason.
-
are there communities for cuda devs so we can talk and grow together?
On the host side however - the API you use to orchestrate execution of kernels on GPUs, data transfers etc. - the official API is very C'ish, annoying and confusing. I have written C++'ish wrappers for it which many enjoy but are of course not officially supported or endorsed: https://github.com/eyalroz/cuda-api-wrappers
- Thin C++-Flavored Wrappers for the CUDA APIs: Runtime, Driver, Nvrtc and NVTX
- Integrating the CUDA APIs (Driver, Runtime, JIT) in pleasant modern-C++ wrappers
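To give a concrete sense of the "very C'ish" complaint in the comment above, here is a minimal plain runtime-API example (illustrative code, not taken from the linked wrappers): every call returns a status the caller must check by hand, and cleanup is manual on every exit path.

```cpp
// Host-side orchestration with the plain C-style CUDA runtime API.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { data[i] *= factor; }
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);
    float* dev = nullptr;

    cudaError_t err = cudaMalloc((void**)&dev, n * sizeof(float));
    if (err != cudaSuccess) { std::fprintf(stderr, "%s\n", cudaGetErrorString(err)); return 1; }

    err = cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    if (err != cudaSuccess) { std::fprintf(stderr, "%s\n", cudaGetErrorString(err)); return 1; }

    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    err = cudaGetLastError();               // launches have no return value; errors must be fetched separately
    if (err != cudaSuccess) { std::fprintf(stderr, "%s\n", cudaGetErrorString(err)); return 1; }

    err = cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) { std::fprintf(stderr, "%s\n", cudaGetErrorString(err)); return 1; }

    cudaFree(dev);                          // manual cleanup - and note it is skipped on the early returns above
    return 0;
}
```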
-
Cybercriminals who breached Nvidia issue one of the most unusual demands ever
Oh, I really wish those hackers would release the sources rather than pursue their dumbass crypto-mining demands... "We decided to help mining and gaming community" - hurting the gaming community, helping the get-rich-quick "community".
My own C++ wrappers for the CUDA APIs (shameless self-plug: https://github.com/eyalroz/cuda-api-wrappers/) would really benefit a lot from behind-the-curtains access to the driver; and even if I just know how the internal logic of the driver and the runtime works, without actually being able to hook into that logic - I would already be able to leverage this somewhat in my design considerations.
-
AMD’s Lisa Su Breaks Through the Silicon Ceiling
As a person making a living from being the "GPU guy" - I definitely agree.
The ecosystem around AMD GPUs is quite small - and now that they seem to have abandoned OpenCL (possibly not their own fault though) - even that is put into question.
But things are bad even on the NVIDIA side. Example of how bad: I had to write my own C++ bindings for the CUDA runtime API (https://github.com/eyalroz/cuda-api-wrappers/). You'd think they would have that after 13 years of CUDA being available, right? Wrong. I repeatedly tried to pitch this to them, but they seem to suffer from the "Not Invented Here" syndrome (https://learnosity.com/not-invented-here-syndrome-explained/). This despite me having a lot of respect for people like Mark Harris, Bryce Lelbach, Duane Merrill et alia, and their work.
You're also right about the "two kinds of brains" - or rather, it's not clear to me that the brains creating the silicon and the brains creating the software are in close enough cooperation.
By the way - it is possible to extract a fairly minimal distribution of CUDA, enough to just run 20 lines of GPGPU code, from their installer. But they won't be bothered to package this nicely for you.
-
How do I use gpus (c++)
Try Vulkan, or OpenCL. There are tons of wrappers for CUDA to make coding simpler, e.g. https://github.com/eyalroz/cuda-api-wrappers
AdaptiveCpp
-
What Every Developer Should Know About GPU Computing
Sapphire Rapids is a CPU.
AMD's primary focus for a GPU software ecosystem these days seems to be implementing CUDA with s/cuda/hip, so AMD directly supports and encourages running GPU software written in CUDA on AMD GPUs.
The only implementation for sycl on AMD GPUs that I can find is a hobby project that apparently is not allowed to use either the 'hip' or 'sycl' names. https://github.com/AdaptiveCpp/AdaptiveCpp
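As a rough illustration of the s/cuda/hip point above (an assumed example, not code from either project): the HIP runtime API mirrors the CUDA runtime API nearly name-for-name, so simple CUDA host and device code typically ports by renaming the calls and the header.

```cpp
// Illustrative HIP version of a trivial CUDA-style kernel launch, built with hipcc.
// hipMalloc/hipMemcpy/hipFree mirror cudaMalloc/cudaMemcpy/cudaFree.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void add_one(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { data[i] += 1.0f; }
}

int main() {
    const int n = 1024;
    std::vector<float> host(n, 0.0f);
    float* dev = nullptr;

    hipMalloc((void**)&dev, n * sizeof(float));
    hipMemcpy(dev, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    add_one<<<(n + 255) / 256, 256>>>(dev, n);   // triple-chevron launches work in HIP too

    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);
    return host[0] == 1.0f ? 0 : 1;
}
```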
-
AMD May Get Across the CUDA Moat
Not natively, but AdaptiveCpp (previously hipSYCL, then OpenSYCL) has a single-source, single-compiler-pass mode, where they basically store LLVM IR as an intermediate representation.
https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...
The performance penalty was within a few percent, at least according to the paper (figures 9 and 10).
-
Offloading standard C++ PSTL to Intel, NVIDIA and AMD GPUs with AdaptiveCpp
AdaptiveCpp (formerly known as hipSYCL) is an independent, open source, clang-based heterogeneous C++ compiler project. I thought some of you might be interested in knowing that we recently added support to offload standard C++ parallel STL algorithms to GPUs from all major vendors. E.g.:
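The example from the original announcement isn't reproduced here; the following is an illustrative sketch of the kind of code that can be offloaded - plain ISO C++ parallel algorithms, with no vendor-specific API in the source. The `--acpp-stdpar` flag mentioned afterwards is my reading of the AdaptiveCpp documentation and should be checked against it.

```cpp
// Plain standard C++ parallel algorithms - nothing GPU-specific appears in the source.
// With AdaptiveCpp's stdpar offloading enabled, the par_unseq work can run on a GPU.
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<float> x(1 << 20);
    std::iota(x.begin(), x.end(), 0.0f);

    // SAXPY-like update expressed purely as a standard algorithm
    std::transform(std::execution::par_unseq, x.begin(), x.end(), x.begin(),
                   [](float v) { return 2.0f * v + 1.0f; });

    float sum = std::reduce(std::execution::par_unseq, x.begin(), x.end(), 0.0f);
    return sum > 0.0f ? 0 : 1;
}
```

Built with something like `acpp -O3 --acpp-stdpar pstl.cpp`, the algorithm calls go through AdaptiveCpp's GPU offloading path; built with an ordinary compiler, the same source still runs on the CPU.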
-
AMD's HIPRT Working Its Way To Blender With ~25% Faster Rendering
In fact this SYCL implementation was initially called hipSYCL because it is based on AMD's ROCm/HIP. AMD had hipSYCL code running on the Frontier supercomputer at least four years ago and continues to support it.
-
hipSYCL can now generate a binary that runs on any Intel/NVIDIA/AMD GPU - in a single compiler pass. It is now the first single-pass SYCL compiler, and the first with unified code representation across backends.
Apple Silicon support through Metal is something that is actively discussed in hipSYCL. See https://github.com/illuhad/hipSYCL/issues/864 https://github.com/illuhad/hipSYCL/issues/460 (loooong discussion)
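For context, this is roughly what minimal vendor-neutral SYCL source looks like (a sketch following the SYCL 2020 API, not taken from the hipSYCL announcement); a single-pass compiler as described above turns this one source into a binary that can run on Intel, NVIDIA or AMD GPUs.

```cpp
// Minimal SYCL 2020 sketch: the same source targets whichever backend the runtime selects.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> data(n, 1.0f);

    sycl::queue q;   // picks a device at runtime (GPU if available, otherwise CPU)
    {
        sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                acc[i] *= 2.0f;    // identical kernel code on every backend
            });
        });
    }   // buffer destructor waits and copies results back to data
    return data[0] == 2.0f ? 0 : 1;
}
```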
-
Bringing Nvidia® and AMD support to oneAPI
But really, the DPC++ part of oneAPI (which is many APIs) is just SYCL + extensions, and there are several other SYCL implementations which have already featured CUDA and Hip (AMD) support for a long time. The most popular and widely-used is hipSYCL, which we've been using in an HPC context on NV hardware for over 4 years now.
-
Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available
Unfortunately, the AMD and Nvidia plugins are proprietary. AMD users are probably better served with hipSYCL, if they somehow find an application using SYCL...
-
There is a framework for everything.
Also, you might want to take a look at an implementation like hipSYCL :)
-
The Next Platform: "Intel Takes The SYCL To Nvidia's CUDA With Migration Tool"
Yup. SYCL is the future: https://github.com/illuhad/hipSYCL
-
Phoronix: "Intel's Vulkan Linux Driver Adds Experimental Mesh Shader Support For DG2/Alchemist"
ROCm is completely independent from these. It's a compute stack containing an OpenCL implementation for Radeon GPUs, plus a CUDA-like language called HIP which can be compiled either to device code for Radeon GPUs or to PTX to work with Nvidia GPUs. However, some researchers also created hipSYCL, which allows SYCL to run atop HIP; you can think of it like DXVK - the program uses the DirectX/SYCL API, and DXVK/hipSYCL converts it to Vulkan/HIP (with one difference - DXVK does the conversion at runtime, while hipSYCL does it at compile time).
What are some alternatives?
imgui - Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
Duilib
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
ILGPU - ILGPU JIT Compiler for high-performance .Net GPU programs
triSYCL - Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
nana - a modern C++ GUI library
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
Elements C++ GUI library - Elements C++ GUI library
cuda_memtest - Fork of CUDA GPU memtest
FTXUI - Functional-style C++ terminal UI library inspired by React: simple and elegant syntax (in the author's opinion), UTF-8 and fullwidth character support (→ 测试), no dependencies, keyboard & mouse navigation; cross-platform: Linux and macOS (main targets), Windows (experimental, thanks to contributors) and WebAssembly; builds with GCC, Clang, MSVC and Emscripten.
gpuowl - GPU Mersenne primality test.