CUDA.jl vs cupynumeric

| | CUDA.jl | cupynumeric |
|---|---|---|
| Mentions | 15 | 10 |
| Stars | 1,263 | 844 |
| Growth | 2.3% | 2.6% |
| Activity | 9.5 | 2.2 |
| Latest commit | 7 days ago | 2 days ago |
| Language | Julia | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CUDA.jl
- Ask HN: Best way to learn GPU programming?
It would also mean learning Julia, but you can write GPU kernels in Julia and then compile them for NVIDIA CUDA, AMD ROCm, or Intel oneAPI.
https://juliagpu.org/
I've written CUDA kernels and I knew nothing about it going in.
- What's your main programming language?
- How is Julia Performance with GPUs (for LLMs)?
See https://juliagpu.org/
- Yann Lecun: ML would have advanced if other lang had been adopted versus Python
- C++ is making me depressed / CUDA question
If you just want to do some numerical code that requires linear algebra and GPU, your best bet would be Julia or Python+JAX.
- Nearly trivial distributed parallelization of stencil-based GPU and CPU applications with…
GitHub - JuliaGPU/CUDA.jl: CUDA programming in Julia.
- Why Fortran is easy to learn
- Generic GPU Kernels
Should have (2017) in the title.
It is indeed cool to program Julia directly on the GPU, and this has evolved further since then; see https://juliagpu.org/
- Announcing The Rust CUDA Project; An ecosystem of crates and tools for writing and executing extremely fast GPU code fully in Rust
I'm excited to eventually see something like JuliaGPU with support for multiple backends.
- [Media] 100% Rust path tracer running on CPU, GPU (CUDA), and OptiX (for denoising) using one of my upcoming projects. There is no C/C++ code at all; the program shares a single Rust crate for the core raytracer and uses Rust for the viewer and renderer.
That's really cool! Have you looked at CUDA.jl for the Julia language? Maybe you could take some ideas from there. I am pretty sure it does the same thing you do here, and it supports arbitrary code with the limitations that you cannot allocate memory, I/O is disallowed, and badly typed (dynamic) code will not compile.
cupynumeric
- CuPy: NumPy and SciPy for GPU
If you like CuPy, definitely check out the multi-node, multi-GPU version, cuNumeric: https://github.com/nv-legate/cunumeric
Would love to get any feedback from the community.
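As a rough illustration of the drop-in style these comments refer to, here is a minimal CuPy sketch; it assumes CuPy is installed with a CUDA-capable GPU available and only exercises a few NumPy-compatible calls.

```python
# Minimal sketch: CuPy mirrors the NumPy API, but the arrays live on the GPU.
# Assumes a working CuPy install and a CUDA-capable device.
import cupy as cp

x = cp.arange(1_000_000, dtype=cp.float32)  # device array
y = cp.sqrt(x) * 2.0                        # elementwise ops run on the GPU
print(float(y.sum()))                       # copy the scalar result back to the host
```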
- Announcing Chapel 1.32
- Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
I am biased because this is my research area, but I have to respectfully disagree. Actor models are awful, and the only reason that's not obvious is that everything else is even more awful.
But if you look at, e.g., the recent work on task-based models, you'll see that you can have literally sequential programs that parallelize automatically. No message passing, no synchronization, no data races, no deadlocks. Read your programs as if they're sequential, and you immediately understand their semantics. Some of these systems are able to scale to thousands of nodes.
An interesting example of this is cuNumeric, which allows you to take sequential Python programs that use NumPy, and by changing one line (the import statement), run automatically on clusters of GPUs. It is 100% pure awesomeness.
https://github.com/nv-legate/cunumeric
(I don't work on cuNumeric, but I do work on the runtime framework that cuNumeric uses.)
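To make the one-line change described above concrete, here is a hedged sketch. It assumes the package's module imports as cupynumeric (earlier releases used cunumeric) and that the NumPy functions used here are among those it implements.

```python
# Sketch of the "change one import" idea above; module name assumed to be
# `cupynumeric` (older releases shipped it as `cunumeric`).
import cupynumeric as np   # was: import numpy as np

# Ordinary NumPy-looking code; the runtime decides how to partition the
# work across the available CPUs, GPUs, and nodes.
a = np.random.rand(4096, 4096)
b = np.random.rand(4096, 4096)
c = a @ b                   # matrix multiply, potentially distributed
print(float(c.sum()))
```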
- GPT in 60 Lines of NumPy
I know this probably isn't intended for performance, but it would be fun to run this in cuNumeric [1] and see how it scales.
[1]: https://github.com/nv-legate/cunumeric
- Dask – a flexible library for parallel computing in Python
If you want built-in GPU support (and distributed execution), you should check out cuNumeric (released by NVIDIA in the last week or so). It also avoids the need to manually specify chunk sizes, as a sibling comment notes.
https://github.com/nv-legate/cunumeric
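A small sketch of the chunk-size point above, contrasting the two APIs; the cupynumeric module name is an assumption (earlier releases used cunumeric), and both libraries must be installed for it to run.

```python
# Dask arrays ask for an explicit chunk layout up front ...
import dask.array as da

a = da.ones((10_000, 10_000), chunks=(1_000, 1_000))
print(a.sum().compute())

# ... while cuNumeric keeps the plain NumPy signature and decides the
# partitioning itself (module name assumed to be `cupynumeric`).
import cupynumeric as np

b = np.ones((10_000, 10_000))
print(float(b.sum()))
```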
- Julia is the better language for extending Python
Try Dask: distribute your data, run everything as dask.delayed, and then compute only at the end (see the sketch below).
Also check out legate.numpy from NVIDIA, which promises to be a drop-in NumPy replacement that will use all your CPU cores without any tweaks on your part.
https://github.com/nv-legate/legate.numpy
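A minimal sketch of the dask.delayed pattern described above; load_chunk and process are hypothetical stand-ins for whatever per-partition work the pipeline actually does.

```python
# Build a lazy task graph with dask.delayed and trigger it once at the end.
from dask import delayed

@delayed
def load_chunk(name):
    # hypothetical stand-in for reading one shard of the data
    return list(range(10))

@delayed
def process(chunk):
    # hypothetical per-chunk computation
    return sum(chunk)

parts = [process(load_chunk(n)) for n in ["part-0", "part-1", "part-2"]]
total = delayed(sum)(parts)   # still lazy; nothing has executed yet
print(total.compute())        # the single compute() at the very end
```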
- Learning more about HPC as a python guy
Something for the HPC tools category: https://github.com/nv-legate/legate.numpy
- Unifying the CUDA Python Ecosystem
You might be interested in Legate [1]. It supports the NumPy interface as a drop-in replacement, supports GPUs and also distributed machines. And you can see for yourself their performance results; they're not far off from hand-tuned MPI.
[1]: https://github.com/nv-legate/legate.numpy
Disclaimer: I work on the library Legate uses for distributed computing, but otherwise have no connection.
- Legate NumPy: An Aspiring Drop-In Replacement for NumPy at Scale
What are some alternatives?
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
cupy - NumPy & SciPy for GPU
awesome-quant - A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
numba - NumPy aware dynamic Python compiler using LLVM
CudaPy - CudaPy is a runtime library that lets Python programmers access NVIDIA's CUDA parallel computation API.
grcuda - Polyglot CUDA integration for the GraalVM