Vulkan.jl vs KernelAbstractions.jl

| | Vulkan.jl | KernelAbstractions.jl |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 106 | 334 |
| Growth | 0.0% | 2.4% |
| Activity | 8.0 | 7.9 |
| Latest commit | 4 months ago | 5 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Vulkan.jl
- GPU vendor-agnostic fluid dynamics solver in Julia
You may be confusing front-end APIs with compiler backends.
Julia is flexible enough that you can essentially define domain-specific languages within Julia for certain applications. In this case, we are using Julia as an abstract front end and deferring the concrete interface to vendor-specific GPU compilation drivers. Part of what permits this is that Julia is an LLVM front end and many of the vendor drivers include LLVM-based backends. With some transformation of the Julia abstract syntax tree and the LLVM IR, we can connect the two.
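As a quick illustration (not from the thread itself), the standard-library macro `@code_llvm` prints the LLVM IR that Julia generates for a concrete call, which is the layer that GPU backends retarget to vendor toolchains:

```julia
using InteractiveUtils  # provides @code_llvm (loaded by default in the REPL)

# Julia lowers each method to LLVM IR before native code generation;
# @code_llvm prints that IR for a concrete call. GPU backends such as
# GPUCompiler.jl hook in at this layer and retarget the IR to vendor
# toolchains (e.g. NVPTX for NVIDIA, AMDGPU for ROCm).
f(x) = 2x + 1
@code_llvm f(1.0)
```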
That said, we are mostly dependent on vendors providing the backend compiler technology. When they do, we can bridge Julia to that interface. We can wrap Vulkan and technologies like oneAPI.
https://github.com/JuliaGPU/Vulkan.jl
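To show what the wrapper layer looks like, here is a minimal sketch based on Vulkan.jl's documented high-level API; exact constructors and field names may differ between versions:

```julia
using Vulkan

# Create a Vulkan instance with no layers and no extensions, then list
# the physical devices the driver exposes. Vulkan.jl wraps the C API in
# snake_case functions that return result types unwrapped with unwrap().
instance = Instance([], [])
for pd in unwrap(enumerate_physical_devices(instance))
    props = get_physical_device_properties(pd)
    println(props.device_name)
end
```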
- CUDA.jl v3.3: union types, debug info, graph APIs
KernelAbstractions.jl
- Why is AMD leaving ML to nVidia?
For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything has worked nicely. To set up, install the rocm-opencl package with dnf and the AMDGPU.jl Julia package, add yourself to the video group, and you are good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
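A sketch of that setup in Julia terms (package names as in the comment; the exact system packages vary by distro, so check the AMDGPU.jl docs):

```julia
# System prerequisites (Fedora, per the comment above):
#   sudo dnf install rocm-opencl
#   sudo usermod -aG video $USER   # then log out and back in
using Pkg
Pkg.add("AMDGPU")                # vendor-specific ROCm backend
Pkg.add("KernelAbstractions")    # portable kernel layer

using AMDGPU
@assert AMDGPU.functional()      # sanity-check that the ROCm stack is usable
```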
- Generic GPU Kernels
> Higher level abstractions
like these?
https://github.com/JuliaGPU/KernelAbstractions.jl
- CUDA.jl v3.3: union types, debug info, graph APIs
For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
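For a feel of the interface, here is a minimal KA kernel sketch. It uses the current KernelAbstractions API with backend-level synchronization; the event/dependency system the comment refers to was part of older KA releases.

```julia
using KernelAbstractions

# An elementwise y .= a .* x .+ y kernel. @kernel marks the kernel body;
# @index(Global) is this work-item's global linear index, and @Const
# declares a read-only argument.
@kernel function axpy!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

x = rand(Float32, 1024)
y = rand(Float32, 1024)

backend = CPU()                       # swap in CUDABackend() / ROCBackend()
kernel! = axpy!(backend, 64)          # instantiate with workgroup size 64
kernel!(y, 2.0f0, x; ndrange = length(x))
KernelAbstractions.synchronize(backend)
```

The same kernel body runs unchanged on any backend; only the backend object (and the array types) change.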
What are some alternatives?
oneAPI.jl - Julia support for the oneAPI programming toolkit.
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
AMDGPU.jl - AMD GPU (ROCm) programming in Julia
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
www.julialang.org - Julia Project website
Agents.jl - Agent-based modeling framework in Julia