cunumeric vs CudaPy

| | cunumeric | CudaPy |
|---|---|---|
| Mentions | 9 | 1 |
| Stars | 595 | 4 |
| Growth | 0.0% | - |
| Activity | 8.5 | 0.0 |
| Latest Commit | 1 day ago | over 8 years ago |
| Language | Python | Haskell |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cunumeric
- Announcing Chapel 1.32
-
Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
I am biased because this is my research area, but I have to respectfully disagree. Actor models are awful, and the only reason that's not obvious is that everything else is even more awful.
But if you look at e.g., the recent work on task-based models, you'll see that you can have literally sequential programs that parallelize automatically. No message passing, no synchronization, no data races, no deadlocks. Read your programs as if they're sequential, and you immediately understand their semantics. Some of these systems are able to scale to thousands of nodes.
An interesting example of this is cuNumeric, which allows you to take sequential Python programs that use NumPy, and by changing one line (the import statement), run automatically on clusters of GPUs. It is 100% pure awesomeness.
https://github.com/nv-legate/cunumeric
(I don't work on cuNumeric, but I do work on the runtime framework that cuNumeric uses.)
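For context, the one-line change described above is cuNumeric's documented usage: swap the NumPy import for cuNumeric and leave the rest of the program alone. A minimal sketch, assuming the operations used fall within cuNumeric's NumPy coverage (array sizes illustrative):

```python
import cunumeric as np   # was: import numpy as np; nothing else changes

x = np.random.rand(1000, 1000)   # same NumPy API as before
y = x @ x.T                      # cuNumeric partitions and distributes this for you
print(float(y.sum()))
```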
-
GPT in 60 Lines of NumPy
I know this probably isn't intended for performance, but it would be fun to run this in cuNumeric [1] and see how it scales.
[1]: https://github.com/nv-legate/cunumeric
-
Dask – a flexible library for parallel computing in Python
If you want built-in GPU support (and distributed execution), you should check out cuNumeric (released by NVIDIA in the last week or so). It also avoids the need to manually specify chunk sizes, as a sibling comment notes.
https://github.com/nv-legate/cunumeric
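To make the chunk-size point concrete, here is a minimal dask.array sketch (sizes illustrative): with Dask you choose the chunking yourself, whereas the comment's point is that cuNumeric partitions arrays automatically.

```python
import dask.array as da

# You pick the chunk sizes by hand; a poor choice hurts performance.
x = da.random.random((20000, 20000), chunks=(2000, 2000))
y = (x @ x.T).mean()
print(y.compute())   # nothing executes until compute()
```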
-
Julia is the better language for extending Python
Try dask
Distribute your data and run everything as dask.delayed and then compute only at the end.
Also check out legate.numpy from NVIDIA, which promises to be a drop-in NumPy replacement that will use all your CPU cores without any tweaks on your part.
https://github.com/nv-legate/legate.numpy
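A minimal sketch of the dask.delayed pattern the comment describes (the function and data are illustrative): build the task graph lazily, then trigger all the work with a single compute() at the end.

```python
import dask

@dask.delayed
def square(x):
    return x * x

parts = [square(i) for i in range(8)]
total = dask.delayed(sum)(parts)   # combine the lazy results
print(total.compute())             # 140; all work happens here
```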
-
Learning more about HPC as a python guy
Something for the HPC tools category: https://github.com/nv-legate/legate.numpy
-
Unifying the CUDA Python Ecosystem
You might be interested in Legate [1]. It supports the NumPy interface as a drop-in replacement, and it runs on GPUs as well as on distributed machines. You can see their performance results for yourself; they're not far off from hand-tuned MPI.
[1]: https://github.com/nv-legate/legate.numpy
Disclaimer: I work on the library Legate uses for distributed computing, but otherwise have no connection.
- Legate NumPy: An Aspiring Drop-In Replacement for NumPy at Scale
CudaPy
-
Unifying the CUDA Python Ecosystem
The closest thing that comes to mind is Numba's CUDA JIT compilation: https://numba.pydata.org/numba-doc/latest/cuda/index.html
Then there is CuPy: https://github.com/oulgen/CudaPy
But in my opinion, the most future-proof solutions are higher-level frameworks like NumPy, JAX, and TensorFlow. TensorFlow can JIT-compile Python functions for the GPU (tf.function).
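For reference, a minimal sketch of the Numba CUDA JIT mentioned above (requires a CUDA-capable GPU; the kernel and data are illustrative):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)        # global thread index
    if i < arr.size:
        arr[i] += 1.0

data = np.zeros(1024, dtype=np.float32)
d_data = cuda.to_device(data)
add_one[4, 256](d_data)     # launch 4 blocks of 256 threads
print(d_data.copy_to_host()[:4])
```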
What are some alternatives?
cupy - NumPy & SciPy for GPU
CUDA.jl - CUDA programming in Julia.
cudf - cuDF - GPU DataFrame Library
numba - NumPy aware dynamic Python compiler using LLVM
wgpu-py - Next generation GPU API for Python
legate.pandas - An Aspiring Drop-In Replacement for Pandas at Scale
amaranth - A modern hardware definition language and toolchain based on Python
grcuda - Polyglot CUDA integration for the GraalVM
shared_numpy - A simple library for creating shared memory numpy arrays