| | grcuda | numba |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 217 | 5 |
| Growth | 0.0% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 10 months ago | 7 days ago |
| Language | Java | Python |
| License | GNU General Public License v3.0 or later | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Unifying the CUDA Python Ecosystem
That project might be abandoned, but this strategy is used in NVIDIA and NVIDIA-adjacent projects (through LLVM):
https://github.com/rapidsai/cudf/blob/branch-0.20/python/cud...
https://github.com/gmarkall/numba/blob/master/numba/cuda/com...
> but we also need high level expressibility that doesn't require writing kernels in C
The above are possible because C is really just a frontend to PTX:
https://docs.nvidia.com/cuda/parallel-thread-execution/index...
Fundamentally, you are never going to be able to write CUDA kernels without thinking about the CUDA architecture, any more than you can write async code without thinking about concurrency.
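The point above can be made concrete with a pure-Python sketch of CUDA's thread-indexing model (no GPU or CUDA library involved; `global_thread_id`, `launch`, and `saxpy` are illustrative names, not any real API). However high-level the frontend, the kernel author still has to map each (thread, block) coordinate onto the data and guard against out-of-range threads:

```python
def global_thread_id(thread_idx, block_idx, block_dim):
    # CUDA's canonical 1-D global index: each block contributes
    # block_dim threads; thread_idx selects a thread within the block.
    return block_idx * block_dim + thread_idx

def launch(kernel, grid_dim, block_dim, *args):
    # Pure-Python stand-in for a kernel launch: iterate over every
    # (block, thread) pair the hardware would schedule in parallel.
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(thread_idx, block_idx, block_dim, *args)

def saxpy(thread_idx, block_idx, block_dim, a, x, y, out):
    # out = a*x + y, one element per thread.
    i = global_thread_id(thread_idx, block_idx, block_dim)
    if i < len(out):  # guard: the grid may hold more threads than elements
        out[i] = a * x[i] + y[i]

n = 10
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n
launch(saxpy, 3, 4, 2.0, x, y, out)  # 3 blocks x 4 threads = 12 threads cover 10 elements
# out[i] == 2.0 * i + 1.0 for each i
```

The grid/block split and the bounds guard are exactly the architectural details that survive every abstraction layer, whether the kernel is written in C, Python, or anything else that lowers to PTX.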
What are some alternatives?
- cudf - cuDF - GPU DataFrame Library
- cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
- CUDA.jl - CUDA programming in Julia
- wgpu-py - Next generation GPU API for Python
- CudaPy - CudaPy is a runtime library that lets Python programmers access NVIDIA's CUDA parallel computation API.
- intel-graphics-compiler
- copperhead - Data Parallel Python
- gtc2017-numba - Numba tutorial for GTC 2017 conference
- amaranth - A modern hardware definition language and toolchain based on Python
- compute-runtime - Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver