cunumeric VS CudaPy

Compare cunumeric vs CudaPy and see what their differences are.

cunumeric

An Aspiring Drop-In Replacement for NumPy at Scale (by nv-legate)

CudaPy

CudaPy is a runtime library that lets Python programmers access NVIDIA's CUDA parallel computation API. (by oulgen)
                 cunumeric              CudaPy
Mentions         9                      1
Stars            595                    4
Growth           1.2%                   -
Activity         8.5                    0.0
Latest commit    3 days ago             over 8 years ago
Language         Python                 Haskell
License          Apache License 2.0     MIT License
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity score of 9.0 places a project among the top 10% of the most actively developed projects we track.

cunumeric

Posts with mentions or reviews of cunumeric. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-09.
  • Announcing Chapel 1.32
    6 projects | news.ycombinator.com | 9 Oct 2023
  • Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
    4 projects | news.ycombinator.com | 19 Feb 2023
    I am biased because this is my research area, but I have to respectfully disagree. Actor models are awful, and the only reason it's not obvious is because everything else is even more awful.

    But if you look at e.g., the recent work on task-based models, you'll see that you can have literally sequential programs that parallelize automatically. No message passing, no synchronization, no data races, no deadlocks. Read your programs as if they're sequential, and you immediately understand their semantics. Some of these systems are able to scale to thousands of nodes.

    An interesting example of this is cuNumeric, which allows you to take sequential Python programs that use NumPy, and by changing one line (the import statement), run automatically on clusters of GPUs. It is 100% pure awesomeness.

    https://github.com/nv-legate/cunumeric

    (I don't work on cuNumeric, but I do work on the runtime framework that cuNumeric uses.)
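
    A minimal sketch of the one-line change described in that comment, assuming the cunumeric package and a Legate runtime with GPUs are available (array sizes are arbitrary):

        # A plain NumPy program would start with:  import numpy as np
        # The cuNumeric version changes only the import; the rest is unchanged.
        import cunumeric as np

        a = np.random.rand(2000, 2000)
        b = np.random.rand(2000, 2000)
        c = a @ b            # executed by the Legate runtime, potentially across many GPUs
        print(float(c.sum()))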

  • GPT in 60 Lines of NumPy
    9 projects | news.ycombinator.com | 9 Feb 2023
    I know this probably isn't intended for performance, but it would be fun to run this in cuNumeric [1] and see how it scales.

    [1]: https://github.com/nv-legate/cunumeric

  • Dask – a flexible library for parallel computing in Python
    8 projects | news.ycombinator.com | 17 Nov 2021
    If you want built-in GPU support (and distributed execution), you should check out cuNumeric (released by NVIDIA in the last week or so). It also avoids the need to manually specify chunk sizes, as mentioned in a sibling comment.

    https://github.com/nv-legate/cunumeric
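
    A rough sketch of the chunk-size point made above, assuming both dask.array and cunumeric are installed (sizes and chunk values are purely illustrative):

        # Dask: the programmer chooses a chunk size for each array.
        import dask.array as da
        x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
        print(x.mean().compute())

        # cuNumeric: no chunking decisions; the Legate runtime partitions arrays itself.
        import cunumeric as np
        y = np.random.random((10_000, 10_000))
        print(y.mean())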

  • Julia is the better language for extending Python
    13 projects | news.ycombinator.com | 19 Apr 2021
    Try dask

    Distribute your data and run everything as dask.delayed and then compute only at the end.

    Also check out legate.numpy from NVIDIA, which promises to be a drop-in NumPy replacement that will use all your CPU cores without any tweaks on your part.

    https://github.com/nv-legate/legate.numpy
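
    A small sketch of the dask.delayed pattern suggested above; the load and summarize functions here are made up purely for illustration:

        import dask

        @dask.delayed
        def load(i):
            # Stand-in for reading one partition of the data.
            return list(range(i * 1_000, (i + 1) * 1_000))

        @dask.delayed
        def summarize(chunk):
            return sum(chunk)

        # Build the task graph lazily...
        partials = [summarize(load(i)) for i in range(8)]
        total = dask.delayed(sum)(partials)

        # ...and compute only at the end, as the comment recommends.
        print(total.compute())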

  • Learning more about HPC as a python guy
    1 project | /r/HPC | 19 Apr 2021
    Something for the HPC tools category: https://github.com/nv-legate/legate.numpy
  • Unifying the CUDA Python Ecosystem
    13 projects | news.ycombinator.com | 16 Apr 2021
    You might be interested in Legate [1]. It supports the NumPy interface as a drop-in replacement, and runs on GPUs as well as distributed machines. And you can see their performance results for yourself; they're not far off from hand-tuned MPI.

    [1]: https://github.com/nv-legate/legate.numpy

    Disclaimer: I work on the library Legate uses for distributed computing, but otherwise have no connection.

  • Legate NumPy: An Aspiring Drop-In Replacement for NumPy at Scale
    1 project | news.ycombinator.com | 13 Apr 2021

CudaPy

Posts with mentions or reviews of CudaPy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-16.
  • Unifying the CUDA Python Ecosystem
    13 projects | news.ycombinator.com | 16 Apr 2021
    The closest thing that comes to mind is Numba's CUDA JIT compilation: https://numba.pydata.org/numba-doc/latest/cuda/index.html

    Then you have Cupy : https://github.com/oulgen/CudaPy

    But in my opinion, the most future-proof solutions are higher-level frameworks like NumPy, JAX, and TensorFlow. TensorFlow can JIT-compile Python functions for the GPU (tf.function).
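
    A minimal sketch of the Numba CUDA JIT approach mentioned in that comment, assuming numba and a CUDA-capable GPU are available (the kernel and launch configuration are illustrative):

        import numpy as np
        from numba import cuda

        @cuda.jit
        def add_kernel(x, y, out):
            i = cuda.grid(1)           # global thread index
            if i < out.size:
                out[i] = x[i] + y[i]

        n = 1_000_000
        x = np.arange(n, dtype=np.float32)
        y = np.ones(n, dtype=np.float32)
        out = np.zeros(n, dtype=np.float32)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        add_kernel[blocks, threads_per_block](x, y, out)  # Numba copies host arrays to and from the device
        print(out[:3])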

What are some alternatives?

When comparing cunumeric and CudaPy you can also consider the following projects:

cupy - NumPy & SciPy for GPU

CUDA.jl - CUDA programming in Julia.

cudf - cuDF - GPU DataFrame Library

numba - NumPy aware dynamic Python compiler using LLVM

wgpu-py - Next generation GPU API for Python

legate.pandas - An Aspiring Drop-In Replacement for Pandas at Scale

amaranth - A modern hardware definition language and toolchain based on Python

grcuda - Polyglot CUDA integration for the GraalVM

shared_numpy - A simple library for creating shared memory numpy arrays