cuda-api-wrappers VS soui

Compare cuda-api-wrappers vs soui and see what are their differences.

soui

SOUI is one of the few lightweight open-source DirectUI libraries for rapidly developing Windows desktop applications. Its predecessor was Duiengine, which in turn originated from Bkwin, the UI library of the open-source edition of Kingsoft Guard (金山卫士). The library has reached its current state through many years of continuous updates. (by SOUI2)
GUI
                  cuda-api-wrappers                            soui
Mentions          10                                           -
Stars             726                                          766
Growth            -                                            0.1%
Activity          8.8                                          4.3
Last commit       6 days ago                                   5 months ago
Language          C++                                          C++
License           BSD 3-clause "New" or "Revised" License      GNU General Public License v3.0 or later
  • Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
  • Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

cuda-api-wrappers

Posts with mentions or reviews of cuda-api-wrappers. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-01.
  • VUDA: A Vulkan Implementation of CUDA
    3 projects | news.ycombinator.com | 1 Jul 2023
    1. This implements the clunky C-ish API; there are also the Modern C++ API wrappers, with automatic error checking, RAII resource control, etc.; see: https://github.com/eyalroz/cuda-api-wrappers (due disclosure: I'm the author)

    2. Implementing the _runtime_ API is not the right choice; it's important to implement the _driver_ API, since otherwise you can't isolate contexts, dynamically add newly compiled JIT kernels via modules, etc. (see the sketch after this list)

    3. This is less than 3000 lines of code. Wrapping all of the core CUDA APIs (driver, runtime, NVTX, JIT compilation of CUDA-C++ and of PTX) took me > 14,000 LoC.
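
    To make point 2 concrete, here is a minimal sketch (mine, not taken from either project) of what the driver API allows and the runtime API does not: creating a separate context and loading a freshly JIT-compiled PTX image as a module at runtime. The PTX is assumed to come from NVRTC or a similar compiler and to define an extern "C" kernel named "scale"; error handling is collapsed into asserts purely to keep the sketch short.

        // Driver-API sketch: own context + dynamically loaded, JIT-compiled module.
        // `ptx_image` is assumed to be NUL-terminated PTX (e.g. produced by NVRTC)
        // defining an extern "C" kernel named "scale".
        #include <cuda.h>
        #include <cassert>

        void launch_jit_kernel(const char* ptx_image, CUdeviceptr data, int n)
        {
            CUdevice   device;
            CUcontext  context;
            CUmodule   module;
            CUfunction kernel;

            assert(cuInit(0) == CUDA_SUCCESS);
            assert(cuDeviceGet(&device, 0) == CUDA_SUCCESS);
            // A context of our own, isolated from whatever else is using the GPU:
            assert(cuCtxCreate(&context, 0, device) == CUDA_SUCCESS);
            // Code compiled at runtime can be added as a module:
            assert(cuModuleLoadData(&module, ptx_image) == CUDA_SUCCESS);
            assert(cuModuleGetFunction(&kernel, module, "scale") == CUDA_SUCCESS);

            void* args[] = { &data, &n };
            assert(cuLaunchKernel(kernel,
                                  (n + 255) / 256, 1, 1,  // grid dimensions
                                  256, 1, 1,              // block dimensions
                                  0, nullptr,             // dynamic shared memory, stream
                                  args, nullptr) == CUDA_SUCCESS);

            cuModuleUnload(module);
            cuCtxDestroy(context);
        }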

  • WezTerm is a GPU-accelerated cross-platform terminal emulator
    4 projects | news.ycombinator.com | 13 Mar 2023
    > since the underlying APIs are still C/C++,

    If the use of GPUs is via CUDA, there are my https://github.com/eyalroz/cuda-api-wrappers/ which are RAII/CADRe-based, and therefore less unsafe. And on the Rust side - don't you need a bunch of unsafe code in the library enabling GPU support?
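
    For a sense of what that looks like in practice, here is a hedged sketch in the style of the examples in that repository's README; the header name and exact function signatures differ between versions of the library, so the identifiers below are illustrative rather than authoritative.

        // Illustrative only: follows the general style of eyalroz/cuda-api-wrappers
        // examples, but the exact API has changed between library versions.
        #include <cuda/api_wrappers.hpp>  // umbrella header in some versions (assumption)
        #include <vector>

        void roundtrip_through_device(std::vector<float>& host_data)
        {
            auto device = cuda::device::current::get();
            // Device buffer owned by a smart pointer - freed automatically on scope exit.
            auto buffer = cuda::memory::device::make_unique<float[]>(device, host_data.size());

            // A failing call throws an exception carrying the CUDA status, so there is
            // no per-call checking of return codes.
            cuda::memory::copy(buffer.get(), host_data.data(),
                               host_data.size() * sizeof(float));
            // ... launch a kernel here, e.g. via cuda::launch(kernel, launch_config, ...);
            cuda::memory::copy(host_data.data(), buffer.get(),
                               host_data.size() * sizeof(float));
        }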

  • GNU Octave
    4 projects | news.ycombinator.com | 21 Jan 2023
    Given your criteria, you might want to consider (modern) C++.

    * Fast - in many cases faster than Rust, although the difference is inconsequential relative to the Python-to-Rust improvement, I guess.

    * _Really_ utilize CUDA, OpenCL, Vulkan, etc. Specifically, Rust GPU is limited in its supported features; see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...

    * Host-side use of CUDA is at least as nice as, and probably nicer than, what you'll get with Rust - provided, that is, you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.

    * ... which brings me to another point: a richer offering of libraries for various needs than Rust has, for you to possibly utilize.

    * Easier to distribute your program than with Rust: a target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.

    There are downsides, of course, but I was just applying your criteria.

  • How CUDA Programming Works
    1 project | news.ycombinator.com | 5 Jul 2022
    https://github.com/eyalroz/cuda-api-wrappers

    I try to address these and some other issues.

    We should also remember that NVIDIA artificially prevents its profiling tools from supporting OpenCL kernels - for no good reason.

  • are there communities for cuda devs so we can talk and grow together?
    1 project | /r/CUDA | 24 Jun 2022
    On the host side, however - the API you use to orchestrate kernel execution on GPUs, data transfers, etc. - the official API is very C-ish, annoying and confusing. I have written C++-ish wrappers for it, which many enjoy, but they are of course not officially supported or endorsed: https://github.com/eyalroz/cuda-api-wrappers
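
    For readers who haven't used it, the "C-ish" flavor referred to here looks roughly like the following: every runtime API call returns a status code that has to be checked (or silently ignored), and every allocation must be released by hand. (This is my own illustration, not code from the linked repository.)

        // Plain CUDA runtime API: explicit status checks and manual cleanup everywhere.
        #include <cuda_runtime.h>
        #include <cstdio>
        #include <cstdlib>

        #define CHECK(call)                                                      \
            do {                                                                 \
                cudaError_t err_ = (call);                                       \
                if (err_ != cudaSuccess) {                                       \
                    std::fprintf(stderr, "%s failed: %s\n", #call,               \
                                 cudaGetErrorString(err_));                      \
                    std::exit(EXIT_FAILURE);                                     \
                }                                                                \
            } while (0)

        void roundtrip_through_device(float* host_data, size_t n)
        {
            float* device_data = nullptr;
            CHECK(cudaMalloc(reinterpret_cast<void**>(&device_data), n * sizeof(float)));
            CHECK(cudaMemcpy(device_data, host_data, n * sizeof(float),
                             cudaMemcpyHostToDevice));
            // ... launch a kernel here, then check cudaGetLastError() ...
            CHECK(cudaMemcpy(host_data, device_data, n * sizeof(float),
                             cudaMemcpyDeviceToHost));
            CHECK(cudaFree(device_data));  // nothing releases this for you
        }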
  • Thin C++-Flavored Wrappers for the CUDA APIs: Runtime, Driver, Nvrtc and NVTX
    1 project | news.ycombinator.com | 22 Jun 2022
  • Integrating the CUDA APIs (Driver, Runtime, JIT) in pleasant modern-C++ wrappers
    1 project | news.ycombinator.com | 26 Mar 2022
  • Cybercriminals who breached Nvidia issue one of the most unusual demands ever
    3 projects | news.ycombinator.com | 4 Mar 2022
    Oh, I really wish those hackers would release the sources rather than pursue their dumbass crypto-mining demands... "We decided to help mining and gaming community" - hurting the gaming community, helping the get-rich-quick "community".

    My own C++ wrappers for the CUDA APIs (shameless self-plug: https://github.com/eyalroz/cuda-api-wrappers/) would really benefit a lot from behind-the-curtains access to the driver; and even if I just know how the internal logic of the driver and the runtime works, without actually being able to hook into that logic - I would already be able to leverage this somewhat in my design considerations.

  • AMD’s Lisa Su Breaks Through the Silicon Ceiling
    1 project | news.ycombinator.com | 25 Sep 2021
    As a person making a living from being the "GPU guy" - I definitely agree.

    The ecosystem around AMD GPUs is quite small - and now that they seem to have abandoned OpenCL (possibly not their own fault though) - even that is put into question.

    But things are bad even on the NVIDIA side. Example of how bad: I had to write my own C++ bindings for the CUDA runtime API (https://github.com/eyalroz/cuda-api-wrappers/). You'd think they would have that after 13 years of CUDA being available, right? Wrong. I repeatedly tried to pitch this to them, but they seem to suffer from the "Not Invented Here" syndrome (https://learnosity.com/not-invented-here-syndrome-explained/). This despite me having a lot of respect for people like Mark Harris, Bryce Lelbach, Duane Merrill et alia, and their work.

    You're also right about the "two kinds of brains" - or rather, it's not clear to me that the brains creating the silicon and the brains creating the software are in close enough cooperation.

    By the way - it is possible to extract a pretty minimal distribution of CUDA from their installer, enough to just run 20 lines of GPGPU code. But they won't be bothered to package this nicely for you.

  • How do I use gpus (c++)
    1 project | /r/learnprogramming | 2 May 2021
    Try Vulkan or OpenCL. There are also tons of wrappers for CUDA to make coding simpler, e.g. https://github.com/eyalroz/cuda-api-wrappers

soui

Posts with mentions or reviews of soui. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning soui yet.
Tracking mentions began in Dec 2020.

What are some alternatives?

When comparing cuda-api-wrappers and soui you can also consider the following projects:

imgui - Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies

Duilib

Elements C++ GUI library - Elements C++ GUI library

ILGPU - ILGPU JIT Compiler for high-performance .Net GPU programs

FTXUI - A functional-style C++ terminal UI library inspired by [1] and React: simple and elegant syntax (in the author's opinion), UTF-8 and fullwidth character support (→ 测试), no dependencies, keyboard & mouse navigation. Cross-platform: Linux/macOS (main targets), Windows (experimental, thanks to contributors), and WebAssembly; tested with Linux emscripten/gcc/clang, Windows MSVC, and macOS clang.

nana - a modern C++ GUI library

libRocket - libRocket - The HTML/CSS User Interface library

AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!

NanoGUI - Minimalistic GUI library for OpenGL