cuda-api-wrappers Alternatives
Similar projects and alternatives to cuda-api-wrappers
wezterm
A GPU-accelerated cross-platform terminal emulator and multiplexer written by @wez and implemented in Rust
Rust-CUDA
Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
imgui
Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies
AdaptiveCpp
Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
vuda
VUDA is a header-only library based on Vulkan that provides a CUDA Runtime API interface for writing GPU-accelerated applications.
FTXUI
Features:
- Functional style, inspired by [1] and React
- Simple and elegant syntax (in my opinion)
- Support for UTF-8 and fullwidth characters (→ 测试)
- No dependencies
- Cross-platform: Linux/macOS (main target), Windows (experimental, thanks to contributors), WebAssembly
- Keyboard & mouse navigation
Tested platforms and compilers: Linux (Emscripten, GCC, Clang), Windows (MSVC), macOS (Clang)
sciter
Sciter: the Embeddable HTML/CSS/JS engine for modern UI development
Stacer
Linux System Optimizer and Monitoring - https://oguzhaninan.github.io/Stacer-Web
cuda-api-wrappers reviews and mentions
VUDA: A Vulkan Implementation of CUDA
1. This implements the clunky C-ish API; there are also my Modern C++ API wrappers, with automatic error checking, RAII resource control, etc.; see: https://github.com/eyalroz/cuda-api-wrappers (due disclosure: I'm the author)
2. Implementing the _runtime_ API is not the right choice; it's important to implement the _driver_ API, since otherwise you can't isolate contexts, dynamically add newly compiled JIT kernels via modules, etc.
3. This is less than 3,000 lines of code. Wrapping all of the core CUDA APIs (driver, runtime, NVTX, JIT compilation of CUDA C++ and of PTX) took me over 14,000 LoC.
WezTerm is a GPU-accelerated cross-platform terminal emulator
> since the underlying APIs are still C/C++,
If the use of GPUs is via CUDA, there are my https://github.com/eyalroz/cuda-api-wrappers/ which are RAII/CADRe, and therefore less unsafe. And on the Rust side: don't you need a bunch of unsafe code in the library enabling GPU support?
GNU Octave
Given your criteria, you might want to consider (modern) C++.
* Fast: in many cases faster than Rust, although the difference is inconsequential relative to the Python-to-Rust improvement, I guess.
* _Really_ utilize CUDA, OpenCL, Vulkan, etc. Specifically, Rust GPU is limited in its supported features; see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...
* Host-side use of CUDA is at least as nice as, and probably nicer than, what you'll get with Rust. That is, provided you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.
* ... which brings me to another point: a richer offering of libraries for various needs than Rust has, for you to possibly utilize.
* Easier to distribute than Rust: a target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.
There are downsides, of course, but I was just applying your criteria.
How CUDA Programming Works
https://github.com/eyalroz/cuda-api-wrappers
I try to address these and some other issues.
We should also remember that NVIDIA artificially prevents its profiling tools from supporting OpenCL kernels, for no good reason.
are there communities for cuda devs so we can talk and grow together?
On the host side, however - the API you use to orchestrate execution of kernels on GPUs, data transfers, etc. - the official API is very C'ish, annoying and confusing. I have written C++'ish wrappers for it, which many enjoy but are of course not officially supported or endorsed: https://github.com/eyalroz/cuda-api-wrappers
- Thin C++-Flavored Wrappers for the CUDA APIs: Runtime, Driver, Nvrtc and NVTX
- Integrating the CUDA APIs (Driver, Runtime, JIT) in pleasant modern-C++ wrappers
Cybercriminals who breached Nvidia issue one of the most unusual demands ever
Oh, I really wish those hackers would release the sources rather than pursue their dumbass crypto-mining demands... "We decided to help mining and gaming community" - hurting the gaming community, helping the get-rich-quick "community".
My own C++ wrappers for the CUDA APIs (shameless self-plug: https://github.com/eyalroz/cuda-api-wrappers/) would benefit a great deal from behind-the-curtains access to the driver; and even if I just knew how the internal logic of the driver and the runtime works, without actually being able to hook into that logic, I would already be able to leverage this somewhat in my design considerations.
AMD’s Lisa Su Breaks Through the Silicon Ceiling
As a person making a living from being the "GPU guy" - I definitely agree.
The ecosystem around AMD GPUs is quite small - and now that they seem to have abandoned OpenCL (possibly not their own fault though) - even that is put into question.
But things are bad even on the NVIDIA side. Example of how bad: I had to write my own C++ bindings for the CUDA runtime API (https://github.com/eyalroz/cuda-api-wrappers/). You'd think they would have that after 13 years of CUDA being available, right? Wrong. I repeatedly tried to pitch this to them, but they seem to suffer from the "Not Invented Here" syndrome (https://learnosity.com/not-invented-here-syndrome-explained/). This despite me having a lot of respect for people like Mark Harris, Bryce Lelbach, Duane Merrill et alia, and their work.
You're also right about the "two kinds of brains" - or rather, it's not clear to me that the brains creating the silicon and the brains creating the software are in close enough cooperation.
By the way - it is possible to extract, from their installer, a CUDA distribution small enough to justify running just 20 lines of GPGPU code. But they won't be bothered to package this nicely for you.
How do I use gpus (c++)
Try Vulkan, or OpenCL. There are tons of wrappers for CUDA to make coding simpler, e.g. https://github.com/eyalroz/cuda-api-wrappers
Stats
eyalroz/cuda-api-wrappers is an open-source project licensed under the BSD 3-clause "New" or "Revised" License, which is an OSI-approved license.
The primary programming language of cuda-api-wrappers is C++.
Popular Comparisons
- cuda-api-wrappers VS imgui
- cuda-api-wrappers VS Duilib
- cuda-api-wrappers VS ILGPU
- cuda-api-wrappers VS AdaptiveCpp
- cuda-api-wrappers VS nana
- cuda-api-wrappers VS FTXUI
- cuda-api-wrappers VS Elements C++ GUI library
- cuda-api-wrappers VS wxWidgets
- cuda-api-wrappers VS sciter
- cuda-api-wrappers VS Stacer