cuda-api-wrappers Alternatives
Similar projects and alternatives to cuda-api-wrappers
- wezterm: A GPU-accelerated cross-platform terminal emulator and multiplexer, written by @wez and implemented in Rust.
- kompute: General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
- AdaptiveCpp: Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: the independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
- vuda: A header-only library, based on Vulkan, that provides a CUDA Runtime API interface for writing GPU-accelerated applications.
cuda-api-wrappers discussion
cuda-api-wrappers reviews and mentions
-
The Missing Nvidia GPU Glossary
NVIDIA does have a bunch of FOSS libraries - like CUB and Thrust (now part of CCCL). But they tend to suffer from "not invented here" syndrome [1]; so they seem to avoid collaborating on FOSS they don't manage/control themselves.
I have a bit of a chip on my shoulder here, since I've been trying to pitch my Modern C++ API wrappers to them for years, and even though I've gotten some appreciative comments from individuals, they have shown zero interest.
https://github.com/eyalroz/cuda-api-wrappers/
There is also their driver, which is supposedly "open source", but actually none of the logic is exposed to you. Their runtime library is closed too, their management utility (nvidia-smi), their LLVM-based compiler, their profilers, their OpenCL stack :-(
I must say they do have relatively extensive documentation, even if it doesn't cover everything.
[1] - https://en.wikipedia.org/wiki/Not_invented_here
-
The Success and Failure of Ninja (2020)
> users of ninja ... all Meson projects, which appears to increasingly be the build system used in the free software world;
So, AFAICT, that hasn't turned out to be the case.
> the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.
Well... sometimes. Other times, the fact that there's good code that does something goes a very long way, and people live with the architectural faults. And as for the social issues - they rarely stand in opposition to the code itself.
> Some pieces of Ninja took struggle to get to and then are obvious in retrospect. I think this is true of much of math
Yup. And some of the rest of math becomes obvious when someone re-derives it using alternative, more convenient/powerful techniques.
> fetching file status from Linux is extremely fast.
It of course depends on what your definition of "fast" is. In the extremely-slow world of frequent system calls and file I/O, I guess one could say that.
> I think the reason so few succeed at this is that it's just too tempting to mix the layers.
As an author of a library that also focuses on being a "layer" of sorts (https://github.com/eyalroz/cuda-api-wrappers/), I struggle with this temptation a lot! Especially when, like the author says, the boundaries of the layers are not as clear as one might imagine.
-
CuPy: NumPy and SciPy for GPU
> probably the easiest way to interface with custom CUDA kernels
In Python? Perhaps. Generally? No, it isn't. The full power of the CUDA APIs, including all runtime-compilation options etc.: https://github.com/eyalroz/cuda-api-wrappers/
Example:
// the source could be a string literal, loaded from a .cu file, etc.
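As a rough sketch of what runtime compilation looks like at the level the wrappers cover, here is the underlying NVRTC C API that they build on (the wrapper types and names in cuda-api-wrappers differ; this uses the raw API directly, requires the CUDA toolkit to build, and abbreviates error handling):

```cpp
// Sketch only: raw NVRTC, which cuda-api-wrappers wraps with RAII types.
// Link against libnvrtc; most error checks omitted for brevity.
#include <nvrtc.h>
#include <string>
#include <stdexcept>

std::string compile_to_ptx(const char* source, const char* name)
{
    nvrtcProgram prog;
    // the source could be a string literal, loaded from a .cu file, etc.
    nvrtcCreateProgram(&prog, source, name, 0, nullptr, nullptr);
    const char* opts[] = { "--std=c++17" };
    if (nvrtcCompileProgram(prog, 1, opts) != NVRTC_SUCCESS) {
        throw std::runtime_error("runtime compilation failed");
    }
    size_t ptx_size;
    nvrtcGetPTXSize(prog, &ptx_size);
    std::string ptx(ptx_size, '\0');
    nvrtcGetPTX(prog, &ptx[0]);
    nvrtcDestroyProgram(&prog);
    return ptx;
}
```

The resulting PTX can then be loaded as a module and launched via the driver API.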
-
Kompute – Vulkan Alternative to CUDA
This is _not_ an alternative to CUDA, nor to OpenCL. It has a high-level, opinionated API [1] which covers only a (rather small) part of the API of each of those.
It might, _in principle_, have been developed - with much more work than has gone into it - into such an alternative; but I'm actually not sure even of that, since I have poor command of Vulkan. I became suspicious as someone who maintains C++ API wrappers for CUDA myself [2], and knows that just doing that takes a lot more code and a lot more work.
[1] - I assume it is opinionated so as to cater to CNN simulation for large language models, and basically not much more.
[2] - https://github.com/eyalroz/cuda-api-wrappers/
-
VUDA: A Vulkan Implementation of CUDA
1. This implements the clunky C-ish runtime API; there are also my Modern-C++ API wrappers, with automatic error checking, RAII resource control, etc.; see: https://github.com/eyalroz/cuda-api-wrappers (due disclosure: I'm the author)
2. Implementing the _runtime_ API is not the right choice; it's important to implement the _driver_ API, otherwise you can't isolate contexts, dynamically add newly-compiled JIT kernels via modules etc.
3. This is less than 3000 lines of code. Wrapping all of the core CUDA APIs (driver, runtime, NVTX, JIT compilation of CUDA-C++ and of PTX) took me > 14,000 LoC.
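Point 2 can be made concrete with a sketch of the driver API's extra capabilities - explicit contexts and dynamic module loading - which the runtime API hides. This uses the raw driver API (requires the CUDA toolkit / libcuda to build); the kernel name "my_kernel" is a made-up placeholder, and error handling is omitted:

```cpp
// Sketch only: driver-API features a runtime-API-level layer cannot offer.
#include <cuda.h>

void launch_jitted_kernel(const char* ptx)
{
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);     // an isolated context - no runtime-API equivalent
    CUmodule mod;
    cuModuleLoadData(&mod, ptx);   // dynamically add a freshly JIT-compiled module
    CUfunction kernel;
    cuModuleGetFunction(&kernel, mod, "my_kernel");  // hypothetical kernel name
    cuLaunchKernel(kernel, 1, 1, 1, 1, 1, 1, 0, nullptr, nullptr, nullptr);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
}
```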
-
WezTerm is a GPU-accelerated cross-platform terminal emulator
> since the underlying API's are still C/C++,
If the use of GPUs is via CUDA, there are my https://github.com/eyalroz/cuda-api-wrappers/ which are RAII/CADRe, and therefore less unsafe. And on the Rust side - don't you need a bunch of unsafe code in the library enabling GPU support?
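The RAII/CADRe point can be illustrated with a minimal owner type in the spirit of such wrappers. This is a sketch: `device_buffer` and the `fake_device_*` functions are made-up stand-ins (so it compiles without CUDA); in the actual library, acquisition and release would go through the CUDA allocation APIs.

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Stand-ins for cudaMalloc/cudaFree, so the sketch compiles without CUDA.
static int live_allocations = 0;
void* fake_device_alloc(std::size_t n) { ++live_allocations; return ::operator new(n); }
void fake_device_free(void* p) { --live_allocations; ::operator delete(p); }

// Minimal RAII owner: the destructor releases the resource, so there is no
// error-prone manual free call - the compiler inserts it on every exit path.
class device_buffer {
    void* ptr_;
    std::size_t size_;
public:
    explicit device_buffer(std::size_t n) : ptr_(fake_device_alloc(n)), size_(n) {}
    ~device_buffer() { if (ptr_) fake_device_free(ptr_); }
    device_buffer(device_buffer&& other) noexcept
        : ptr_(std::exchange(other.ptr_, nullptr)), size_(other.size_) {}
    device_buffer(const device_buffer&) = delete;
    device_buffer& operator=(const device_buffer&) = delete;
    void* get() const { return ptr_; }
    std::size_t size() const { return size_; }
};
```

The "less unsafe" claim is exactly this: a leak or double-free becomes a type-system impossibility rather than a code-review item.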
-
GNU Octave
Given your criteria, you might want to consider (modern) C++.
* Fast - in many cases faster than Rust, although the difference is inconsequential relative to the Python-to-Rust improvement, I guess.
* _Really_ utilize CUDA, OpenCL, Vulkan etc. Specifically, Rust GPU is limited in its supported features; see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...
* Host-side use of CUDA is at least as nice, and probably nicer, than what you'll get with Rust. That is, provided you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.
* ... which brings me to another point: Richer offering of libraries for various needs than Rust, for you to possibly utilize.
* Easier to share than Rust. A target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.
There are downsides, of course, but I was just applying your criteria.
-
How CUDA Programming Works
https://github.com/eyalroz/cuda-api-wrappers
I try to address these and some other issues.
We should also remember that NVIDIA artificially prevents its profiling tools from supporting OpenCL kernels - with no good reason.
-
are there communities for cuda devs so we can talk and grow together?
On the host side however - the API you use to orchestrate execution of kernels on GPUs, data transfers etc. - the official API is very C'ish, annoying and confusing. I have written C++'ish wrappers for it which many enjoy but are of course not officially supported or endorsed: https://github.com/eyalroz/cuda-api-wrappers
- Thin C++-Flavored Wrappers for the CUDA APIs: Runtime, Driver, Nvrtc and NVTX
-
Stats
eyalroz/cuda-api-wrappers is an open-source project licensed under the BSD 3-clause "New" or "Revised" License, which is an OSI-approved license.
The primary programming language of cuda-api-wrappers is C++.
Popular Comparisons
- cuda-api-wrappers VS imgui
- cuda-api-wrappers VS Duilib
- cuda-api-wrappers VS ILGPU
- cuda-api-wrappers VS AdaptiveCpp
- cuda-api-wrappers VS Elements C++ GUI library
- cuda-api-wrappers VS nana
- cuda-api-wrappers VS wxWidgets
- cuda-api-wrappers VS FTXUI
- cuda-api-wrappers VS sciter
- cuda-api-wrappers VS Stacer