KernelAbstractions.jl
oneAPI.jl
| | KernelAbstractions.jl | oneAPI.jl |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 331 | 173 |
| Growth | 3.0% | 2.3% |
| Activity | 8.0 | 8.1 |
| Last commit | 12 days ago | 10 days ago |
| Language | Julia | Julia |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KernelAbstractions.jl
- Why is AMD leaving ML to nVidia?
For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything worked nicely. To install, add the rocm-opencl package with dnf, install the AMDGPU.jl Julia package, add yourself to the video group, and you are good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
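As a rough sketch, those steps look like this (Fedora package and group names as described above; adjust for your distribution):

```julia
# System prep (run in a shell, not Julia):
#   sudo dnf install rocm-opencl      # ROCm OpenCL runtime
#   sudo usermod -aG video $USER      # add yourself to the `video` group

# Julia-side packages:
using Pkg
Pkg.add(["AMDGPU", "KernelAbstractions"])
```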
- Generic GPU Kernels
> Higher level abstractions

Like these? https://github.com/JuliaGPU/KernelAbstractions.jl
- Cuda.jl v3.3: union types, debug info, graph APIs
For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
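To illustrate, here is a minimal sketch of a vendor-agnostic KA kernel (written against the current KernelAbstractions API; note that the event system mentioned above belonged to older KA releases and was later replaced by backend-level synchronization):

```julia
using KernelAbstractions

# An elementwise-add kernel, written once for any supported backend.
@kernel function vadd!(c, @Const(a), @Const(b))
    i = @index(Global)
    @inbounds c[i] = a[i] + b[i]
end

a = rand(Float32, 1024); b = rand(Float32, 1024); c = similar(a)

backend = CPU()   # swap in CUDABackend(), ROCBackend(), etc. with device arrays
vadd!(backend)(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
```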
oneAPI.jl
- GPU vendor-agnostic fluid dynamics solver in Julia
https://github.com/JuliaGPU/oneAPI.jl
As for syntax, Julia scales from a scripting language to a fully typed one. You can write valid, performant code without specifying any types, but you can also specialize methods for specific types. Type annotations use `::`, and types can take parameters in curly brackets. The other aspect that makes this particular example look complicated is the use of Lisp-like macros, which start with `@` and allow for code transformation, as I described earlier. The last aspect is that the author makes extensive use of Unicode. This is purely optional, as you can write Julia in plain ASCII. Some authors like to use `∈` instead of `in`.
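A small sketch (illustrative names only) showing each of those features side by side:

```julia
# Script-style definition, no type annotations:
norm2(v) = sum(x -> x^2, v)

# Specialized method: `::` annotates types, `{T}` is a type parameter.
function norm2(v::AbstractVector{T}) where {T<:Real}
    s = zero(T)
    for x ∈ v        # `∈` is optional Unicode for `in`
        s += x^2
    end
    return s
end

# Macros start with `@` and transform code before it runs:
@assert norm2([3.0f0, 4.0f0]) == 25.0f0
```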
- Writing GPU shaders in Julia?
- Cuda.jl v3.3: union types, debug info, graph APIs
https://github.com/JuliaGPU/AMDGPU.jl
https://github.com/JuliaGPU/oneAPI.jl
These are both less mature than CUDA.jl, but are in active development.
- Unified programming model for all devices – will it catch on?
OpenCL and various other solutions basically require that one writes kernels in C/C++. This is an unfortunate limitation, and can make it hard for less experienced users (researchers especially) to write correct and performant GPU code, since neither language lends itself to writing many mathematical and scientific models in a clean, maintainable manner (in my opinion).
What oneAPI (the runtime) and AMD's ROCm (specifically the ROCR runtime) do that is new is enable packages like oneAPI.jl [1] and AMDGPU.jl [2], both Julia packages, to exist without going through OpenCL or C++ transpilation (which we've tried before, and it's quite painful). This is a great thing, because users of an entirely different language can now utilize their GPUs effectively and with near-optimal performance (optimal w.r.t. what the device can reasonably attain); see the sketch after the links below.
[1] https://github.com/JuliaGPU/oneAPI.jl

[2] https://github.com/JuliaGPU/AMDGPU.jl
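A minimal sketch of what this looks like from the Julia side with oneAPI.jl (assuming an Intel GPU with the oneAPI runtime installed):

```julia
using oneAPI

# oneArray is oneAPI.jl's device array type; data is moved to the GPU here.
a = oneArray(rand(Float32, 1024))
b = oneArray(rand(Float32, 1024))

# Broadcast expressions compile from Julia straight to device code,
# with no OpenCL C or C++ in between.
c = a .+ 2f0 .* b

# Copy back to the host to inspect the result.
println(sum(Array(c)))
```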
What are some alternatives?
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
Vulkan.jl - Using Vulkan from Julia
AMDGPU.jl - AMD GPU (ROCm) programming in Julia
Makie.jl - Interactive data visualizations and plotting in Julia
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)
Agents.jl - Agent-based modeling framework in Julia
FoldsCUDA.jl - Data-parallelism on CUDA using Transducers.jl and for loops (FLoops.jl)