KernelAbstractions.jl VS AMDGPU.jl

Compare KernelAbstractions.jl vs AMDGPU.jl and see how they differ.

                 KernelAbstractions.jl                      AMDGPU.jl
Mentions         4                                          6
Stars            331                                        264
Stars growth     3.0%                                       1.9%
Activity         8.0                                        9.1
Latest commit    12 days ago                                6 days ago
Language         Julia                                      Julia
License          MIT License                                GNU General Public License v3.0 or later
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

KernelAbstractions.jl

Posts with mentions or reviews of KernelAbstractions.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-12.
  • Why is AMD leaving ML to nVidia?
    9 projects | /r/Amd | 12 Apr 2023
    For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything worked nicely. To install, you need to install the rocm-opencl package with dnf, install the AMD Julia package (AMDGPU.jl), and add yourself to the video group, and you're good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
  • Generic GPU Kernels
    7 projects | news.ycombinator.com | 6 Dec 2021
    >Higher level abstractions

    like these?

    https://github.com/JuliaGPU/KernelAbstractions.jl

  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
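
To make that unified interface concrete, here is a minimal KernelAbstractions.jl sketch. The vector-scaling kernel is an illustrative example rather than code from the posts above, and note that the event system praised in the 2021 post was later replaced by a simpler per-backend synchronize call, which this sketch uses:

    using KernelAbstractions

    # One work-item per element: y[i] = a * x[i].
    @kernel function scale!(y, @Const(x), a)
        i = @index(Global)
        y[i] = a * x[i]
    end

    x = rand(Float32, 1024)
    y = similar(x)

    backend = CPU()                # ROCBackend() (AMDGPU.jl) or CUDABackend() (CUDA.jl) work the same way
    kernel! = scale!(backend, 64)  # 64 = workgroup size
    kernel!(y, x, 2.0f0; ndrange = length(x))
    KernelAbstractions.synchronize(backend)
    @assert y ≈ 2.0f0 .* x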

AMDGPU.jl

Posts with mentions or reviews of AMDGPU.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-12.
  • Why is AMD leaving ML to nVidia?
    9 projects | /r/Amd | 12 Apr 2023
    For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything worked nicely. To install, you need to install the rocm-opencl package with dnf, install the AMD Julia package (AMDGPU.jl), and add yourself to the video group, and you're good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
  • [GUIDE] How to install ROCm for GPU Julia programming via Distrobox
    3 projects | /r/steamdeck_linux | 3 Jan 2023
    The Julia package AMDGPU.jl provides a Julia interface for AMD GPU (ROCm) programming. As its documentation says, the package is developed for Julia 1.7 and for 1.9 and above, but not 1.8. I therefore downloaded the Julia 1.7.3 binary from the older-releases page on the Julia site.
  • First True Exascale Supercomputer
    2 projects | news.ycombinator.com | 6 Jul 2022
    This is exciting news! What's also exciting is that it's not just C++ that can run on this supercomputer; there is also good (currently unofficial) support for programming those GPUs from Julia, via the AMDGPU.jl library (note: I am the author/maintainer of this library). Some of our users have been able to run AMDGPU.jl's test suite on the Crusher test system (an attached testing system with the same hardware configuration as Frontier), as well as their own domain-specific programs that use AMDGPU.jl.

    What's nice about programming GPUs in Julia is that you can write code once and execute it on multiple kinds of GPUs, with excellent performance. The KernelAbstractions.jl library makes this possible for compute kernels by acting as a frontend to AMDGPU.jl, CUDA.jl, and soon Metal.jl and oneAPI.jl, allowing a single piece of code to be portable to AMD, NVIDIA, Intel, and Apple GPUs, and also CPUs. Similarly, the GPUArrays.jl library allows the same behavior for idiomatic array operations, and will automatically dispatch calls to BLAS, FFT, RNG, linear solver, and DNN vendor-provided libraries when appropriate.

    I'm personally looking forward to helping researchers get their Julia code up and running on Frontier so that we can push scientific computing to the max!

    Library link: <https://github.com/JuliaGPU/AMDGPU.jl>

  • AI and scientific computing in Kubernetes with the Julia language, K8sClusterManagers.jl
    11 projects | dev.to | 12 Mar 2022
    GitHub - JuliaGPU/AMDGPU.jl: AMD GPU (ROCm) programming in Julia
  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    https://github.com/JuliaGPU/AMDGPU.jl

    https://github.com/JuliaGPU/oneAPI.jl

    These are both less mature than CUDA.jl, but are in active development.

  • Unified programming model for all devices – will it catch on?
    2 projects | news.ycombinator.com | 1 Mar 2021
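
Distilling the setup described in the Fedora and Distrobox posts above into a sketch (hedged: the dnf package name and the video-group step are Fedora-specific details taken from those posts, and a working system ROCm install is assumed):

    # Shell prerequisites (Fedora, per the post above):
    #   sudo dnf install rocm-opencl
    #   sudo usermod -aG video $USER
    # Then, from the Julia REPL:
    using Pkg
    Pkg.add("AMDGPU")                # AMD GPU (ROCm) programming
    Pkg.add("KernelAbstractions")    # portable kernels on top of it

    using AMDGPU
    @show AMDGPU.devices()           # verify the GPU is visible before going further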

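The array-level path described in the Frontier post can be sketched like this (hedged: it assumes a working AMDGPU.jl setup; the same generic code runs elsewhere by swapping the array type, e.g. CuArray on NVIDIA or a plain Array on the CPU):

    using AMDGPU             # provides the ROCArray type
    using LinearAlgebra

    A = ROCArray(rand(Float32, 512, 512))
    B = ROCArray(rand(Float32, 512, 512))

    C = A * B                # matrix multiply dispatches to the vendor BLAS (rocBLAS)
    D = sin.(A) .+ 2 .* B    # broadcasting fuses into a single generated GPU kernel
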
What are some alternatives?

When comparing KernelAbstractions.jl and AMDGPU.jl you can also consider the following projects:

GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.

Vulkan.jl - Using Vulkan from Julia

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

oneAPI.jl - Julia support for the oneAPI programming toolkit.

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

NeuralPDE.jl - Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

Agents.jl - Agent-based modeling framework in Julia

FoldsCUDA.jl - Data-parallelism on CUDA using Transducers.jl and for loops (FLoops.jl)

julia-distributed-computing - The ultimate guide to distributed computing in Julia