cuda-api-wrappers Alternatives

Similar projects and alternatives to cuda-api-wrappers

  1. Killed by Google

    Part guillotine, part graveyard for Google's doomed apps, services, and hardware.

  2. alacritty

    A cross-platform, OpenGL terminal emulator.

  3. kitty

    A cross-platform, fast, feature-rich, GPU-based terminal.

  4. web

    Grow Open Source (by gitcoinco)

  5. wezterm

    A GPU-accelerated cross-platform terminal emulator and multiplexer written by @wez and implemented in Rust.

  6. Numba

    NumPy-aware dynamic Python compiler using LLVM.

  7. conan

    Conan - the open-source C and C++ package manager.

  8. ninja

    A small build system with a focus on speed.

  9. kompute

    General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.

  10. Rust-CUDA

    Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.

  11. tup

    Tup is a file-based build system.

  12. cupy

    NumPy & SciPy for GPU.

  13. cupynumeric

    An aspiring drop-in replacement for NumPy at scale.

  14. imgui

    Dear ImGui: bloat-free graphical user interface for C++ with minimal dependencies.

  15. ILGPU

    ILGPU JIT compiler for high-performance .NET GPU programs.

  16. AdaptiveCpp

    Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: the independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!

  17. vuda

    VUDA is a header-only library based on Vulkan that provides a CUDA Runtime API interface for writing GPU-accelerated applications.

NOTE: Projects are ranked by how often they are mentioned in common posts, plus user-suggested alternatives. A higher mention count therefore indicates a closer or more popular cuda-api-wrappers alternative.

cuda-api-wrappers discussion


cuda-api-wrappers reviews and mentions

Posts that mention or review cuda-api-wrappers. We have used some of these posts to build our list of alternatives and similar projects. The most recent is from 2025-01-14.
  • The Missing Nvidia GPU Glossary
    1 project | news.ycombinator.com | 14 Jan 2025
    NVIDIA does have a bunch of FOSS libraries - like CUB and Thrust (now part of CCCL). But they tend to suffer from "not invented here" syndrome [1]; so they seem to avoid collaborating on FOSS they don't manage/control themselves.

    I have a bit of a chip on my shoulder here, since I've been trying to pitch my Modern C++ API wrappers to them for years, and even though I've gotten some appreciative comments from individuals, they have shown zero interest.

    https://github.com/eyalroz/cuda-api-wrappers/

    There is also their driver, which is supposedly "open source", but actually none of the logic is exposed to you. Their runtime library is closed too, their management utility (nvidia-smi), their LLVM-based compiler, their profilers, their OpenCL stack :-(

    I must say they do have relatively extensive documentation, even if it doesn't cover everything.

    [1] - https://en.wikipedia.org/wiki/Not_invented_here

  • The Success and Failure of Ninja (2020)
    5 projects | news.ycombinator.com | 28 Nov 2024
    > users of ninja ... all Meson projects, which appears to increasingly be the build system used in the free software world;

    So, AFAICT, that hasn't turned out to be the case.

    > the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.

    Well... sometimes. Other times, the fact that there's good code that does something goes a very long way, and people live with the architectural faults. And as for the social issues - they rarely stand in opposition to the code itself.

    > Some pieces of Ninja took struggle to get to and then are obvious in retrospect. I think this is true of much of math

    Yup. And some of the rest of math becomes obvious when someone re-derives it using alternative and more convenient/powerful techniques.

    > fetching file status from Linux is extremely fast.

    It of course depends on what your definition of "fast" is. In the extremely-slow world of frequent system calls and file I/O, I guess one could say that.

    > I think the reason so few succeed at this is that it's just too tempting to mix the layers.

    As an author of a library that also focuses on being a "layer" of sorts (https://github.com/eyalroz/cuda-api-wrappers/), I struggle with this temptation a lot! Especially when, like the author says, the boundaries of the layers are not as clear as one might imagine.

  • CuPy: NumPy and SciPy for GPU
    8 projects | news.ycombinator.com | 20 Sep 2024
    > probably the easiest way to interface with custom CUDA kernels

    In Python? Perhaps. Generally? No, it isn't. Full power of the CUDA APIs including all runtime compilation options etc. : https://github.com/eyalroz/cuda-api-wrappers/

    Example:

      // the source could be a string literal, loaded from a .cu file, etc.
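
    For orientation, here is a minimal sketch of the raw NVRTC-plus-driver-API sequence that wrappers like these cover; the kernel source, names, and launch dimensions are illustrative, and error checking is omitted for brevity:

      #include <nvrtc.h>
      #include <cuda.h>
      #include <string>

      // Illustrative kernel source; as noted above, it could equally be loaded from a .cu file.
      const char* source =
          "extern \"C\" __global__ void scale(float* v, float a) { v[threadIdx.x] *= a; }";

      int main() {
          // Compile CUDA C++ to PTX at runtime with NVRTC
          nvrtcProgram prog;
          nvrtcCreateProgram(&prog, source, "scale.cu", 0, nullptr, nullptr);
          const char* opts[] = { "--gpu-architecture=compute_70" };  // illustrative option
          nvrtcCompileProgram(prog, 1, opts);
          size_t ptx_size;
          nvrtcGetPTXSize(prog, &ptx_size);
          std::string ptx(ptx_size, '\0');
          nvrtcGetPTX(prog, &ptx[0]);
          nvrtcDestroyProgram(&prog);

          // Load the generated PTX as a driver-API module and launch the kernel
          cuInit(0);
          CUdevice dev;   cuDeviceGet(&dev, 0);
          CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);
          CUmodule mod;   cuModuleLoadData(&mod, ptx.c_str());
          CUfunction fn;  cuModuleGetFunction(&fn, mod, "scale");

          CUdeviceptr d_v;
          cuMemAlloc(&d_v, 256 * sizeof(float));
          float a = 2.0f;
          void* args[] = { &d_v, &a };
          cuLaunchKernel(fn, 1, 1, 1, 256, 1, 1, 0, nullptr, args, nullptr);
          cuCtxSynchronize();
      }
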
  • Kompute – Vulkan Alternative to CUDA
    2 projects | news.ycombinator.com | 19 Jul 2024
    This is _not_ an alternative to CUDA, nor to OpenCL. It has a high-level, opinionated API [1] which covers a part (a rather small part) of the API of each of those.

    It might, _in principle_, have been developed - with much more work than has gone into it - into such an alternative; but I am actually not sure of that, since I have poor command of Vulkan. I got suspicious because, as someone who maintains C++ API wrappers for CUDA myself [2], I know that just doing that is a lot more code and a lot more work.

    [1] - I assume it is opinionated to cater to CNN simulation for large language models, and basically not much more.

    [2] - https://github.com/eyalroz/cuda-api-wrappers/

  • VUDA: A Vulkan Implementation of CUDA
    3 projects | news.ycombinator.com | 1 Jul 2023
    1. This implements the clunky C-ish API; there's also the Modern-C++ API wrappers, with automatic error checking, RAII resource control etc.; see: https://github.com/eyalroz/cuda-api-wrappers (due disclosure: I'm the author)

    2. Implementing the _runtime_ API is not the right choice; it's important to implement the _driver_ API, otherwise you can't isolate contexts, dynamically add newly-compiled JIT kernels via modules etc.

    3. This is less than 3000 lines of code. Wrapping all of the core CUDA APIs (driver, runtime, NVTX, JIT compilation of CUDA-C++ and of PTX) took me > 14,000 LoC.
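
    To make point 1 concrete: with the C-ish runtime API, every call returns a status that must be checked by hand, and resources are freed manually on every exit path. A minimal sketch of that style (the wrappers instead throw exceptions on failure and tie resource lifetimes to C++ objects):

      #include <cuda_runtime.h>
      #include <cstdio>
      #include <cstdlib>
      #include <cstddef>

      // The C-ish style: an explicit status check after every call, and manual cleanup.
      #define CHECK(call) do { \
          cudaError_t err_ = (call); \
          if (err_ != cudaSuccess) { \
              std::fprintf(stderr, "%s failed: %s\n", #call, cudaGetErrorString(err_)); \
              std::exit(EXIT_FAILURE); \
          } \
      } while (0)

      void runtime_api_style(std::size_t n) {
          float* d_data = nullptr;
          CHECK(cudaMalloc(reinterpret_cast<void**>(&d_data), n * sizeof(float)));
          CHECK(cudaMemset(d_data, 0, n * sizeof(float)));
          // ... launch kernels, copy results back, checking each call ...
          CHECK(cudaFree(d_data));  // must not be forgotten on any exit path
      }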

  • WezTerm is a GPU-accelerated cross-platform terminal emulator
    4 projects | news.ycombinator.com | 13 Mar 2023
    > since the underlying API's are still C/C++,

    If the use of GPUs is via CUDA, there are my https://github.com/eyalroz/cuda-api-wrappers/ which are RAII/CADRe, and therefore less unsafe. And on the Rust side - don't you need a bunch of unsafe code in the library enabling GPU support?

  • GNU Octave
    4 projects | news.ycombinator.com | 21 Jan 2023
    Given your criteria, you might want to consider (modern) C++.

    * Fast - in many cases faster than Rust, although the difference is inconsequential relative to the Python-to-Rust improvement, I guess.

    * _Really_ utilize CUDA, OpenCL, Vulkan etc. Specifically, Rust GPU is limited in its supported features; see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...

    * Host-side use of CUDA is at least as nice, and probably nicer, than what you'll get with Rust. That is, provided you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.

    * ... which brings me to another point: Richer offering of libraries for various needs than Rust, for you to possibly utilize.

    * Easier to share than Rust. A target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.

    There are downsides, of course, but I was just applying your criteria.

  • How CUDA Programming Works
    1 project | news.ycombinator.com | 5 Jul 2022
    https://github.com/eyalroz/cuda-api-wrappers

    I try to address these and some other issues.

    We should also remember that NVIDIA artificially prevents its profiling tools from supporting OpenCL kernels - with no good reason.

  • are there communities for cuda devs so we can talk and grow together?
    1 project | /r/CUDA | 24 Jun 2022
    On the host side, however - the API you use to orchestrate execution of kernels on GPUs, data transfers, etc. - the official API is very C'ish, annoying, and confusing. I have written C++'ish wrappers for it, which many enjoy but which are of course not officially supported or endorsed: https://github.com/eyalroz/cuda-api-wrappers
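
    As a rough illustration (the wrapper calls below follow the style of the project's example programs from memory; header and function names may differ between versions, so consult the repository for the exact current API):

      #include <cuda/api.hpp>   // cuda-api-wrappers; the header name has varied across versions
      #include <vector>

      // The raw runtime-API equivalent would be cudaSetDevice / cudaMalloc / cudaMemcpy /
      // cudaFree, each followed by a manual status check.
      void copy_to_gpu(const std::vector<float>& host_data) {
          auto device = cuda::device::get(0);  // pick a device by index
          auto buffer = cuda::memory::device::make_unique<float[]>(device, host_data.size());
          cuda::memory::copy(buffer.get(), host_data.data(),
                             host_data.size() * sizeof(float));  // throws on failure
          // ... launch kernels on the device, copy results back ...
      }   // the device buffer is released automatically here (RAII)
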
  • Thin C++-Flavored Wrappers for the CUDA APIs: Runtime, Driver, Nvrtc and NVTX
    1 project | news.ycombinator.com | 22 Jun 2022

Stats

Basic cuda-api-wrappers repo stats:
  Mentions: 14
  Stars: 811
  Activity: 8.9
  Last commit: 14 days ago
