GPUCompiler.jl vs oneAPI.jl

Compare GPUCompiler.jl and oneAPI.jl to see how they differ.

GPUCompiler.jl

Reusable compiler infrastructure for Julia GPU backends. (by JuliaGPU)

oneAPI.jl

Julia support for the oneAPI programming toolkit. (by JuliaGPU)
              GPUCompiler.jl                            oneAPI.jl
Mentions      5                                         4
Stars         143                                       173
Growth        1.4%                                      2.3%
Last commit   7 days ago                                3 days ago
Activity      8.5                                       8.1
Language      Julia                                     Julia
License       GNU General Public License v3.0 or later  GNU General Public License v3.0 or later
Mentions is the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub. Growth is the month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

GPUCompiler.jl

Posts with mentions or reviews of GPUCompiler.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-06.
  • Julia and GPU processing, how does it work?
    1 project | /r/Julia | 1 Jun 2022
  • GenieFramework – Web Development with Julia
    4 projects | news.ycombinator.com | 6 Apr 2022
  • We Use Julia, 10 Years Later
    10 projects | news.ycombinator.com | 14 Feb 2022
    I don't think it's frowned upon to compile; many people want this capability as well. If you had a program that could be proven to use no dynamic dispatch, it would probably be feasible to compile it as a static binary. But as long as you have even a tiny bit of dynamic behavior, you need the Julia runtime, so currently a binary will be very large, with lots of theoretically unnecessary libraries bundled into it. There are already efforts like GPUCompiler [1] that do fixed-type compilation, and there will be more in this space in the future.

    [1] https://github.com/JuliaGPU/GPUCompiler.jl
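
To make the dynamic-dispatch point above concrete, here is a small illustrative sketch (the function names are ours, not from the thread): type-stable code can be fully inferred ahead of time, while a `Vector{Any}` forces runtime dispatch.

```julia
# Type-stable: every type is inferable at compile time, so code like
# this is the kind a fixed-type compiler could turn into a small binary.
function sumsq(xs::Vector{Float64})
    s = 0.0
    for x in xs
        s += x * x
    end
    return s
end

# Dynamically dispatched: the element types are only known at runtime,
# which is what drags in the full Julia runtime today.
function sumsq_any(xs::Vector{Any})
    s = 0.0
    for x in xs
        s += x * x   # the method for `x * x` is resolved at runtime
    end
    return s
end

# `@code_warntype sumsq([1.0, 2.0])` shows fully concrete types;
# `@code_warntype sumsq_any(Any[1.0, 2.0])` flags `Any` instability.
```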

  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    Julia's compiler is made to be extendable. GPUCompiler.jl, which for example adds the .ptx compilation output, is a package (https://github.com/JuliaGPU/GPUCompiler.jl). The package manager of Julia itself... is an external package (https://github.com/JuliaLang/Pkg.jl). The built-in SuiteSparse usage? That's a package too (https://github.com/JuliaLang/SuiteSparse.jl). It's fairly arbitrary what is "external" and "internal" in a language that allows that kind of extendability. Literally the only thing that makes these packages a standard library is that they are built into and shipped with the standard system image. Do you want to make your own distribution of Julia that changes what the "internal" packages are? Here's a tutorial that shows how to add plotting to the system image (https://julialang.github.io/PackageCompiler.jl/dev/examples/...). You could set up a binary server for that, and now the time to first plot is 0.4 seconds.
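
As a hedged aside on that .ptx output: CUDA.jl drives GPUCompiler.jl under the hood, and its `@device_code_ptx` reflection macro prints the PTX generated for a kernel launch. A minimal sketch, assuming CUDA.jl is installed and a CUDA-capable GPU is present:

```julia
using CUDA  # CUDA.jl calls into GPUCompiler.jl to emit PTX

function vadd!(c, a, b)
    i = threadIdx().x
    @inbounds c[i] = a[i] + b[i]
    return nothing
end

a = CUDA.ones(32)
b = CUDA.ones(32)
c = CUDA.zeros(32)

# Print the PTX that GPUCompiler.jl generates for this launch
@device_code_ptx @cuda threads=32 vadd!(c, a, b)
```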

    Julia's array system is built so that most arrays in use are not the simple Base.Array. Instead, Julia has an AbstractArray interface definition (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...) which Base.Array conforms to, and which many effectively-standard-library packages like StaticArrays.jl, OffsetArrays.jl, etc. conform to as well, so they can be used in any other Julia package: the differential equation solvers, nonlinear system solvers, optimization libraries, etc. There is a higher chance that packages depend on these packages than that they do not. They are only not part of the Julia distribution because the core idea is to move everything possible out to packages. There's not only a plan to make SuiteSparse and sparse matrix support a package in 2.0, but also ideas about making the rest of linear algebra and arrays themselves into packages, where Julia just defines a memory buffer intrinsic (with an Arrays.jl package likely still shipped with the default image). At that point, are arrays not built into the language? I can understand using such a narrow definition for systems like Fortran or C where the standard library is essentially a fixed concept, but that just does not make sense with Julia. It's inherently fuzzy.
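
The AbstractArray interface mentioned above is small: `size` and `getindex` are enough for generic code to consume a custom type. A minimal sketch (the `Squared` type is a made-up illustration, not from any package named here):

```julia
# A lazy "squared view" of a vector. Subtyping AbstractVector and
# implementing `size` and `getindex` makes generic consumers
# (reductions, broadcasting, solver packages, ...) work unchanged.
struct Squared{T} <: AbstractVector{T}
    parent::Vector{T}
end

Base.size(s::Squared) = size(s.parent)
Base.getindex(s::Squared, i::Int) = s.parent[i]^2

s = Squared([1, 2, 3])
sum(s)      # 14
maximum(s)  # 9
collect(s)  # [1, 4, 9]
```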

  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    A fun fact is that GPUCompiler, which compiles the code that runs on GPUs, is the current way to generate binaries without bundling the whole ~200 MB Julia runtime into the binary.

    https://github.com/JuliaGPU/GPUCompiler.jl/

    https://github.com/tshort/StaticCompiler.jl/
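
For context, this is roughly what the StaticCompiler.jl route looks like, following the pattern in its README (the package is experimental, so treat this as a sketch whose details may have changed; `c"..."` strings come from StaticTools.jl, which provides runtime-free I/O):

```julia
using StaticCompiler, StaticTools

# Must be fully type-stable, with no GC allocations and no dynamic
# dispatch -- exactly the restrictions discussed above.
function hello()
    println(c"Hello from a standalone binary, no Julia runtime attached")
    return 0
end

# Emits a small native executable at ./hello
compile_executable(hello, (), "./")
```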

oneAPI.jl

Posts with mentions or reviews of oneAPI.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-08.
  • GPU vendor-agnostic fluid dynamics solver in Julia
    11 projects | news.ycombinator.com | 8 May 2023
    https://github.com/JuliaGPU/oneAPI.jl

    As for syntax, Julia syntax scales from a scripting language to a fully typed language. You can write valid and performant code without specifying any types, but you can also specialize methods for specific types. Type annotations use `::`, and type parameters go in curly brackets. The other aspect that makes this specific example complicated is the use of Lisp-like macros, which start with `@`. These allow for code transformation as I described earlier. The last aspect is that the author makes extensive use of Unicode. This is purely optional, as you can write Julia with just ASCII. Some authors like to use `∈` instead of `in`.
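
A short illustrative sketch of those layers (our example, not the solver's code): an untyped method, a `::`-annotated specialization with a type parameter in curly brackets, a macro, and optional Unicode.

```julia
using LinearAlgebra  # for norm

# Untyped: works for anything supporting subtraction and norm
relerr(a, b) = norm(a - b) / norm(b)

# Specialized method: `::` annotates, curly brackets carry parameters
relerr(a::Vector{T}, b::Vector{T}) where {T<:AbstractFloat} =
    norm(a - b) / norm(b)

# Macros start with `@` and transform code before it runs
@assert relerr([1.0, 2.0], [1.0, 2.0]) == 0.0

# Unicode is optional sugar: `∈` means the same as `in`
2.0 ∈ [1.0, 2.0]  # true
```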

  • Writing GPU shaders in Julia?
    1 project | /r/Julia | 17 Feb 2022
  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    https://github.com/JuliaGPU/AMDGPU.jl

    https://github.com/JuliaGPU/oneAPI.jl

    These are both less mature than CUDA.jl, but are in active development.

  • Unified programming model for all devices – will it catch on?
    2 projects | news.ycombinator.com | 1 Mar 2021
    OpenCL and various other solutions basically require that one writes kernels in C/C++. This is an unfortunate limitation, and can make it hard for less experienced users (researchers especially) to write correct and performant GPU code, since neither language lends itself to writing many mathematical and scientific models in a clean, maintainable manner (in my opinion).

    What oneAPI (the runtime) and AMD's ROCm (specifically the ROCR runtime) do that is new is enable packages like oneAPI.jl [1] and AMDGPU.jl [2] to exist, without having to go through OpenCL or C++ transpilation (which we've tried out before, and it's quite painful). This is a great thing, because now users of an entirely different language can still utilize their GPUs effectively and with near-optimal performance (optimal w.r.t. what the device can reasonably attain).

    [1] https://github.com/JuliaGPU/oneAPI.jl

    [2] https://github.com/JuliaGPU/AMDGPU.jl
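
To give a flavor of writing a kernel in plain Julia rather than C/C++, here is a sketch adapted from the pattern in the oneAPI.jl README (assumes supported Intel hardware with the oneAPI/Level Zero stack; details may have changed):

```julia
using oneAPI

# An ordinary Julia function serves as the kernel
function vadd(a, b, c)
    i = get_global_id()  # linear work-item index
    @inbounds c[i] = a[i] + b[i]
    return
end

a = oneArray(rand(Float32, 256))
b = oneArray(rand(Float32, 256))
c = similar(a)

@oneapi items=256 vadd(a, b, c)   # launch 256 work-items
Array(c) ≈ Array(a) .+ Array(b)   # true
```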

What are some alternatives?

When comparing GPUCompiler.jl and oneAPI.jl you can also consider the following projects:

KernelAbstractions.jl - Heterogeneous programming in Julia

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

CUDA.jl - CUDA programming in Julia.

Vulkan.jl - Using Vulkan from Julia

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

Makie.jl - Interactive data visualizations and plotting in Julia

LoopVectorization.jl - Macro(s) for vectorizing loops.

AMDGPU.jl - AMD GPU (ROCm) programming in Julia