GPUCompiler.jl VS SciMLBenchmarks.jl

Compare GPUCompiler.jl vs SciMLBenchmarks.jl and see what their differences are.

GPUCompiler.jl

Reusable compiler infrastructure for Julia GPU backends. (by JuliaGPU)
                GPUCompiler.jl                             SciMLBenchmarks.jl
Mentions        5                                          10
Stars           146                                        289
Growth          3.4%                                       2.4%
Activity        8.5                                        9.6
Latest commit   5 days ago                                 5 days ago
Language        Julia                                      MATLAB
License         GNU General Public License v3.0 or later   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

GPUCompiler.jl

Posts with mentions or reviews of GPUCompiler.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-06.
  • Julia and GPU processing, how does it work?
    1 project | /r/Julia | 1 Jun 2022
  • GenieFramework – Web Development with Julia
    4 projects | news.ycombinator.com | 6 Apr 2022
  • We Use Julia, 10 Years Later
    10 projects | news.ycombinator.com | 14 Feb 2022
    I don't think it's frowned upon to compile, many people want this capability as well. If you had a program that could be proven to use no dynamic dispatch it would probably be feasible to compile it as a static binary. But as long as you have a tiny bit of dynamic behavior, you need the Julia runtime so currently a binary will be very large, with lots of theoretically unnecessary libraries bundled into it. There are already efforts like GPUCompiler[1] that do fixed-type compilation, there will be more in this space in the future.

    [1] https://github.com/JuliaGPU/GPUCompiler.jl

  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    Julia's compiler is made to be extendable. GPUCompiler.jl which adds the .ptx compilation output for example is a package (https://github.com/JuliaGPU/GPUCompiler.jl). The package manager of Julia itself... is an external package (https://github.com/JuliaLang/Pkg.jl). The built in SuiteSparse usage? That's a package too (https://github.com/JuliaLang/SuiteSparse.jl). It's fairly arbitrary what is "external" and "internal" in a language that allows that kind of extendability. Literally the only thing that makes these packages a standard library is that they are built into and shipped with the standard system image. Do you want to make your own distribution of Julia that changes what the "internal" packages are? Here's a tutorial that shows how to add plotting to the system image (https://julialang.github.io/PackageCompiler.jl/dev/examples/...). You could setup a binary server for that and now the first time to plot is 0.4 seconds.

    Julia's array system is built so that most arrays in use are not the simple Base.Array. Instead Julia has an AbstractArray interface definition (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...) which the Base.Array conforms to, and many effectively standard library packages like StaticArrays.jl, OffsetArrays.jl, etc. conform to, and thus they can be used in any other Julia package, like the differential equation solvers, nonlinear system solvers, optimization libraries, etc. There is a higher chance that packages depend on these packages than that they do not. They are only not part of the Julia distribution because the core idea is to move everything possible out to packages. There's not only a plan to make SuiteSparse and sparse matrix support be a package in 2.0, but also ideas about making the rest of linear algebra and arrays themselves into packages where Julia just defines a memory buffer intrinsic (with likely the Arrays.jl package still shipped with the default image). At that point, are arrays not built into the language? I can understand using such a narrow definition for systems like Fortran or C where the standard library is essentially a fixed concept, but that just does not make sense with Julia. It's inherently fuzzy.
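    The AbstractArray interface described above is small; a minimal sketch (the type name here is made up for illustration) needs only `size` and `getindex` for a read-only array, after which generic code like reductions and broadcasting works on it:

    ```julia
    # A hypothetical custom array type implementing the AbstractArray
    # interface. Only `size` and `getindex` are required for a read-only
    # array; generic code then works on it unchanged.
    struct SquaredRange <: AbstractArray{Int,1}
        n::Int
    end

    Base.size(a::SquaredRange) = (a.n,)
    Base.getindex(a::SquaredRange, i::Int) = i^2

    a = SquaredRange(5)
    @assert sum(a) == 55          # generic reduction works
    @assert (a .+ 1)[2] == 5      # broadcasting works too
    ```

    This is exactly how packages like OffsetArrays.jl plug into solvers that were never written with them in mind.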

  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    A fun fact is that GPUCompiler, which compiles code to run on GPUs, is currently the way to generate binaries without bundling the whole ~200 MB Julia runtime into them.

    https://github.com/JuliaGPU/GPUCompiler.jl/ https://github.com/tshort/StaticCompiler.jl/
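    A hedged sketch of what that looks like with StaticCompiler.jl (which builds on GPUCompiler.jl); the API has changed across versions, so treat this as illustrative rather than authoritative:

    ```julia
    # Assumes StaticCompiler.jl and StaticTools.jl are installed.
    # The entry point must be type-stable and avoid the GC/runtime.
    using StaticCompiler, StaticTools

    function mymain(argc::Int, argv::Ptr{Ptr{UInt8}})
        s = 0
        for i in 1:100
            s += i
        end
        printf(c"sum = %d\n", s)  # StaticTools' libc-backed printf, no GC
        return 0
    end

    # Emits a small native executable with no Julia runtime dependency.
    compile_executable(mymain, (Int, Ptr{Ptr{UInt8}}), "./")
    ```

    The restriction to type-stable, allocation-free code is the same one the parent comment describes: any dynamic dispatch would pull the full runtime back in.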

SciMLBenchmarks.jl

Posts with mentions or reviews of SciMLBenchmarks.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd of course see from the benchmarks if you read them). One of the key differences though is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor for the performance boost (and is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead to call from a JIT compiled language than from a static compiled language like C because you can optimize away some of the bindings at the runtime right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia as faster than C and Fortran here. This shouldn't be surprising because it's pretty clear how that works?
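    The mechanism behind that claim can be sketched in a line of Julia: `ccall` lowers to a direct native call instruction once the enclosing method is JIT-compiled, with no wrapper or marshalling layer in between:

    ```julia
    # Calling a C function from libc via ccall. After JIT compilation this
    # is a plain native call; there is no per-call binding overhead, which
    # is how a JIT-compiled caller can match or beat a static one.
    pid = ccall(:getpid, Cint, ())

    @assert pid > 0  # every process has a positive PID
    ```

    The ffi-overhead repository linked above benchmarks exactly this kind of call across languages.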

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto updating tons and tons of ODE benchmarks with ODE systems ranging from size 2 to the thousands, with as many things as we can wrap in as many scenarios as we can wrap. And we don't even "win" all of our benchmarks because unlike for you, these benchmarks aren't for winning but for tracking development (somehow for Hacker News folks they ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Why Fortran is a scientific powerhouse
    2 projects | news.ycombinator.com | 11 Jan 2023
    Project.toml or Manifest.toml? Every package has a Project.toml which specifies bounds (https://github.com/SciML/OrdinaryDiffEq.jl/blob/master/Proje...). Every fully reproducible project has a Manifest that describes the complete package state (https://github.com/SciML/SciMLBenchmarks.jl/blob/master/benc...).
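    For readers unfamiliar with the two files: a hypothetical Project.toml excerpt might look like the following, where `[compat]` gives version bounds and the (separate, machine-generated) Manifest.toml pins the exact resolved versions of every transitive dependency:

    ```toml
    # Hypothetical Project.toml excerpt for illustration; the project
    # name and placeholder uuid are made up.
    name = "MyProject"
    uuid = "..."

    [deps]
    OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"

    [compat]
    OrdinaryDiffEq = "6"
    julia = "1.6"
    ```

    `Pkg.instantiate()` against a committed Manifest.toml reproduces that exact package state on another machine.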
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    > But in the end, it's FORTRAN all the way down. Even in Julia.

    That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use it for (mostly LU-factorization) and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is one part of the DiffEq performance story. The performance of this of course is all validated on https://github.com/SciML/SciMLBenchmarks.jl
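    The pure-Julia LU mentioned above is a drop-in replacement for the LAPACK-backed one; a minimal sketch, assuming RecursiveFactorization.jl is installed:

    ```julia
    # RecursiveFactorization.jl's pure-Julia LU returns the same
    # LinearAlgebra.LU factorization object, so downstream solves
    # work unchanged.
    using LinearAlgebra, RecursiveFactorization

    A = rand(100, 100)
    b = rand(100)

    F = RecursiveFactorization.lu(A)  # pure-Julia LU, partial pivoting
    x = F \ b                         # solve exactly as with LinearAlgebra.lu

    @assert A * x ≈ b
    ```

    Being pure Julia is what lets the DiffEq solvers specialize it on small and medium matrix sizes where OpenBLAS/MKL dispatch overhead dominates.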

  • Twitter Thread: Symbolic Computing for Compiler Optimizations in Julia
    3 projects | /r/Julia | 3 Jan 2022
    Anything that continues to improve the SciMLBenchmarks of differential equation solvers, inverse problems, scientific machine learning, and equation discovery really. But there's a lot of other applications in mind, like generating compiler passes that improve floating point roundoff (like Herbie), a pure-Julia simple implementation of XLA-transformations for BLAS fusion, and a few others that are a bit more out there and will require a paper to describe the connection.
  • In 2022, the difference between symbolic computing and compiler optimizations will be erased in #julialang. Anyone who can come up with a set of symbolic mathematical rules will automatically receive an optimized compiler pass to build better code
    3 projects | /r/programmingcirclejerk | 2 Jan 2022
    Show me a single DAE solver in Haskell that has even come close to the performance we get in the Julia SciMLBenchmarks. Here's just one example. For Haskell packages, all I see are wrappers to GSL and Sundials, both of which are slow in comparison. So this is an 8.5x speedup over something that was already faster than what you could find in Haskell. Show me something with decent speed in DAEs or it's useless.
  • Tutorials for Learning Runge-Kutta Methods with Julia?
    5 projects | /r/Julia | 27 Dec 2021
    That's both a joke and a truth. The DifferentialEquations.jl source code, along with the SciMLBenchmarks and all of the associated documentation, is by far the most complete resource on all of this stuff at this point, for a reason. I've always treated it as "a lab notebook for the community" which is why that 8,000 lines of tableau code, the thousands of convergence tests, etc. are there. Papers have typos sometimes, things change with benchmarks over time, etc. But well-tested code tells you whether something actually converges and what the true performance is today.
  • [D] How important is Numerical Analysis for machine learning?
    2 projects | /r/MachineLearning | 23 Dec 2021
    Star-P was sold off to Microsoft IIRC. Some of the people who had interned there then joined Alan's lab. They created the Julia programming language, where parallelism and performance are now directly built into the language. I created the differential equation solver libraries for the language, which then used all of these properties to benchmark very well, and that's how I subsequently started working with Alan. Then we took this to build systems that combine machine learning and numerical solvers to accelerate and automatically discover physical systems, and the resulting SciML organization and the scientific machine learning research, along with compiler-level automatic differentiation and parallelism, is where all of that is today with the Julia Lab.
  • Julia 1.7 has been released
    15 projects | news.ycombinator.com | 30 Nov 2021
    https://homes.cs.washington.edu/~thickstn/ctpg-project-page/...

    That's all showing the raw iteration count to show that it algorithmically is faster, but the time per iteration is also fast for many reasons showcased in the SciMLBenchmarks routinely outperforming C and Fortran solvers (https://github.com/SciML/SciMLBenchmarks.jl). So it's excelling pretty well, and things like the automated discovery of black hole dynamics are all done using the universal differential equation framework enabled by the SciML tools (see https://arxiv.org/abs/2102.12695 for that application).

    What we are missing however is that, right now these simulations are all writing raw differential equations so we do need a better set of modeling tools. That said, MuJoCo and DiffTaichi are not great physical modeling environments for building real systems, instead we would point to Simulink and Modelica as what are really useful for building real-world systems. So it would be cool if there was a modeling language in Julia which extends that universe and directly does optimal code generation for the Julia solvers... and that's what ModelingToolkit.jl is (https://github.com/SciML/ModelingToolkit.jl). That project is still pretty new, but there's already enough to show some large-scale models outperforming Dymola on examples that require symbolic tearing and index reduction, which is far more than what physical simulation environments used for non-scientific purposes (MuJoCo and DiffTaichi) are able to do. See the workshop for details (https://www.youtube.com/watch?v=HEVOgSLBzWA). And that's just the top level details, there's a whole Julia Computing product called JuliaSim (https://juliacomputing.com/products/juliasim/) which is then being built on these pieces to do things like automatically generate ML-accelerated components and add model building GUIs.

    That said, MuJoCo and DiffTaichi have much better visualizations and animations than MTK. Our focus so far has been on the core routines, making them fast, scalable, stable, and extensive. You'll need to wait for the near future (or build something with Makie) if you want the pretty pictures of the robot to happen automatically. That said, Julia's Makie visualization system has already been shown to be sufficiently powerful for this kind of application (https://nextjournal.com/sdanisch/taking-your-robot-for-a-wal...), so we're excited to see where that will go in the future.

  • Is Julia suitable for computational physics?
    4 projects | /r/Julia | 5 Jan 2021
    Most of the SciML organization is dedicated to research and production level scientific computing for domains like physical systems, chemical reactions, and systems biology (and more of course). The differential equation benchmarks are quite good in comparison to a lot of C++ and Fortran libraries, there's modern neural PDE solvers, pervasive automatic differentiation, automated GPU and distributed parallelism, SDE solvers, DDE solvers, DAE solvers, ModelingToolkit.jl for Modelica-like symbolic transformations for higher index DAEs, Bayesian differential equations, etc. All of that then ties into big PDE solving. You get the picture.

What are some alternatives?

When comparing GPUCompiler.jl and SciMLBenchmarks.jl you can also consider the following projects:

KernelAbstractions.jl - Heterogeneous programming in Julia

DifferentialEquations.jl - Multi-language suite for high-performance solvers of differential equations and scientific machine learning (SciML) components. Ordinary differential equations (ODEs), stochastic differential equations (SDEs), delay differential equations (DDEs), differential-algebraic equations (DAEs), and more in Julia.

CUDA.jl - CUDA programming in Julia.

SciMLTutorials.jl - Tutorials for doing scientific machine learning (SciML) and high-performance differential equation solving with open source software.

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

julia - The Julia Programming Language

Vulkan.jl - Using Vulkan from Julia

ApproxFun.jl - Julia package for function approximation

oneAPI.jl - Julia support for the oneAPI programming toolkit.

Diffractor.jl - Next-generation AD

LoopVectorization.jl - Macro(s) for vectorizing loops.

RecursiveFactorization.jl