| | numba | grcuda |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 5 | 217 |
| Growth | - | 0.0% |
| Activity | 0.0 | 0.0 |
| Latest commit | 6 days ago | 10 months ago |
| Language | Python | Java |
| License | BSD 2-clause "Simplified" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
numba
Unifying the CUDA Python Ecosystem
That project might be abandoned, but this strategy is used in NVIDIA and NVIDIA-adjacent projects (through LLVM):
https://github.com/rapidsai/cudf/blob/branch-0.20/python/cud...
https://github.com/gmarkall/numba/blob/master/numba/cuda/com...
>but we also need high level expressibility that doesn't require writing kernels in C
The above are possible because C is actually just a frontend to PTX:
https://docs.nvidia.com/cuda/parallel-thread-execution/index...
Fundamentally, you are never going to be able to write CUDA kernels without thinking about the CUDA architecture, any more than you'll ever be able to write async code without thinking about concurrency.
grcuda
What are some alternatives?
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
cudf - cuDF - GPU DataFrame Library
CUDA.jl - CUDA programming in Julia.
CudaPy - CudaPy is a runtime library that lets Python programmers access NVIDIA's CUDA parallel computation API.
wgpu-py - Next generation GPU API for Python
copperhead - Data Parallel Python
intel-graphics-compiler
gtc2017-numba - Numba tutorial for GTC 2017 conference
amaranth - A modern hardware definition language and toolchain based on Python
compute-runtime - Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver