GPUCompiler.jl VS 18337

Compare GPUCompiler.jl vs 18337 and see what their differences are.

GPUCompiler.jl

Reusable compiler infrastructure for Julia GPU backends. (by JuliaGPU)

18337

18.337 - Parallel Computing and Scientific Machine Learning (by mitmath)
                 GPUCompiler.jl                              18337
Mentions         5                                           14
Stars            146                                         189
Growth           3.4%                                        7.4%
Activity         8.5                                         5.7
Last commit      8 days ago                                  about 1 year ago
Language         Julia                                       Jupyter Notebook
License          GNU General Public License v3.0 or later    -
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

GPUCompiler.jl

Posts with mentions or reviews of GPUCompiler.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-06.
  • Julia and GPU processing, how does it work?
    1 project | /r/Julia | 1 Jun 2022
  • GenieFramework – Web Development with Julia
    4 projects | news.ycombinator.com | 6 Apr 2022
  • We Use Julia, 10 Years Later
    10 projects | news.ycombinator.com | 14 Feb 2022
    I don't think it's frowned upon to compile; many people want this capability as well. If you had a program that could be proven to use no dynamic dispatch, it would probably be feasible to compile it as a static binary. But as long as you have a tiny bit of dynamic behavior, you need the Julia runtime, so currently a binary will be very large, with lots of theoretically unnecessary libraries bundled into it. There are already efforts like GPUCompiler[1] that do fixed-type compilation, and there will be more in this space in the future.

    [1] https://github.com/JuliaGPU/GPUCompiler.jl
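
The dynamic-dispatch point in the comment above is easy to see with Julia's built-in reflection tools. A rough sketch (the `axpy` function is invented here for illustration): once argument types are concrete, the compiler emits fully static code, which is the property fixed-type compilers such as GPUCompiler.jl rely on, while an `Any`-typed container is enough to bring dynamic dispatch, and with it the runtime, back.

```julia
# Sketch: Julia specializes each method on concrete argument types,
# which is what makes "fixed-type" compilation to static code feasible.
using InteractiveUtils   # standard library; provides @code_llvm / @code_warntype

axpy(a, x, y) = a * x + y   # hypothetical example function

# Fully inferable call: all Float64, no dynamic dispatch left in the emitted IR.
@code_llvm axpy(2.0, 3.0, 4.0)

# Pulling arguments out of an untyped container defeats inference, so dynamic
# dispatch (and hence the Julia runtime) is still required at run time.
axpy_any(v) = axpy(v[1], v[2], v[3])
@code_warntype axpy_any(Any[2.0, 3.0, 4.0])
```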

  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    Julia's compiler is made to be extendable. GPUCompiler.jl, which adds the .ptx compilation output, for example, is a package (https://github.com/JuliaGPU/GPUCompiler.jl). The package manager of Julia itself... is an external package (https://github.com/JuliaLang/Pkg.jl). The built-in SuiteSparse usage? That's a package too (https://github.com/JuliaLang/SuiteSparse.jl). It's fairly arbitrary what is "external" and "internal" in a language that allows that kind of extendability. Literally the only thing that makes these packages a standard library is that they are built into and shipped with the standard system image. Do you want to make your own distribution of Julia that changes what the "internal" packages are? Here's a tutorial that shows how to add plotting to the system image (https://julialang.github.io/PackageCompiler.jl/dev/examples/...). You could set up a binary server for that, and now the time to first plot is 0.4 seconds.
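
For concreteness, the system-image tutorial linked in the comment above boils down to a call like the one below. This is only a sketch assuming PackageCompiler.jl's documented `create_sysimage` entry point; the output file name and the precompile script are illustrative choices, not fixed names.

```julia
# Sketch: bake Plots into a custom system image, per the PackageCompiler.jl tutorial.
using PackageCompiler

create_sysimage(["Plots"];                                # packages to bake into the image
                sysimage_path = "sys_plots.so",           # illustrative output name
                precompile_execution_file = "warmup.jl")  # optional script that exercises plotting
```

Launching Julia with `julia --sysimage sys_plots.so` then starts from the baked image, which is the setup behind the roughly 0.4-second time-to-first-plot figure mentioned above.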

    Julia's array system is built so that most arrays that are used are not the simple Base.Array. Instead, Julia has an AbstractArray interface definition (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...) which Base.Array conforms to, and which many effectively-standard-library packages like StaticArrays.jl, OffsetArrays.jl, etc. conform to as well, so they can be used in any other Julia package: the differential equation solvers, nonlinear system solvers, optimization libraries, etc. There is a higher chance that packages depend on these packages than that they do not. They are only not part of the Julia distribution because the core idea is to move everything possible out to packages. There is not only a plan to make SuiteSparse and sparse matrix support a package in 2.0, but also ideas about making the rest of linear algebra and arrays themselves into packages, where Julia just defines the memory buffer intrinsics (with an Arrays.jl package likely still shipped with the default image). At that point, are arrays not built into the language? I can understand using such a narrow definition for systems like Fortran or C where the standard library is essentially a fixed concept, but it just does not make sense with Julia. It's inherently fuzzy.
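
The AbstractArray interface the comment refers to is quite small. As a sketch, with a toy `Squares` type invented here for illustration, defining `size` and `getindex` is enough for a one-dimensional custom array to work with generic Julia code:

```julia
# Toy AbstractArray: the i-th element is i^2, computed lazily.
struct Squares <: AbstractArray{Int,1}
    n::Int
end

Base.size(s::Squares) = (s.n,)
Base.getindex(s::Squares, i::Int) = i^2

s = Squares(5)
sum(s)       # 55 — generic reductions work with no special-casing
s .+ 1       # broadcasting falls out of the same interface
collect(s)   # [1, 4, 9, 16, 25]
```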

  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    A fun fact is that GPUCompiler, which compiles the code to run on GPUs, is currently the way to generate binaries without bundling the whole ~200 MB Julia runtime into the binary.

    https://github.com/JuliaGPU/GPUCompiler.jl/ https://github.com/tshort/StaticCompiler.jl/
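
As a rough illustration of what "compiles the code to run on GPUs" means in practice, here is a minimal CUDA.jl kernel sketch. It assumes an NVIDIA GPU and a working CUDA.jl install, and the `vadd!` kernel is invented for illustration; the `@device_code_ptx` reflection macro shows the .ptx that GPUCompiler.jl produces behind the scenes.

```julia
# Minimal CUDA.jl kernel; GPUCompiler.jl lowers this Julia method to PTX.
using CUDA

function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = CUDA.zeros(Float32, 1024)

@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)

# Inspect the generated PTX for this specialization.
@device_code_ptx @cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```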

18337

Posts with mentions or reviews of 18337. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-31.
  • Hello I wanted to know what would be the best way to get started in Julia and artificial intelligence. I looked at a lot of different languages and saw Julia was good for data science and for artificial intelligence, but would like to know what would be good ways to just do it. Thank you
    1 project | /r/Julia | 13 Mar 2022
  • SciML/SciMLBook: Parallel Computing and Scientific Machine Learning (SciML): Methods and Applications (MIT 18.337J/6.338J)
    4 projects | /r/Julia | 31 Jan 2022
    This was previously the https://github.com/mitmath/18337 course website, but now in a new iteration of the course it is being reset. To avoid issues like this in the future, we have moved the "book" out to its own repository, https://github.com/SciML/SciMLBook, where it can continue to grow and be hosted separately from the structure of a course. This means it can be something other courses can depend on as well. I am looking for web developers who can help build a nicer webpage for this book, and also for the SciMLBenchmarks.
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    I would say Fortran is a pretty great language for teaching beginners in numerical analysis courses. The only issue I have with it is that, similar to using C+MPI (which is what I first learned with, well after a bit of Java), the students don't tend to learn how to go "higher level". You teach them how to write a three loop matrix-matrix multiplication, but the next thing you should teach is how to use higher level BLAS tools and why that will outperform the 3-loop form. But Fortran then becomes very cumbersome (`dgemm` etc.) so students continue to write simple loops and simple algorithms where they shouldn't. A first numerical analysis course should teach simple algorithms AND why the simple algorithms are not good, but a lot of instructors and tools fail to emphasize the second part of that statement.

    On the other hand, the performance + high-level nature of Julia makes it a rather excellent tool for this. In the MIT graduate course 18.337 Parallel Computing and Scientific Machine Learning (https://github.com/mitmath/18337) we do precisely that, starting with direct optimization of loops, then moving to linear algebra, ODE solving, and implementing automatic differentiation. I don't think anyone would want to give a homework assignment to implement AD in Fortran, but in Julia you can do that shortly after looking at loop performance and SIMD, and that's really something special. Steven Johnson's 18.335 graduate course in Numerical Analysis (https://github.com/mitmath/18335) showcases some similar niceties. I really like this demonstration where it starts from scratch with the 3 loops and shows how SIMD and cache-oblivious algorithms build towards BLAS performance, and why most users should ultimately not be writing such loops (https://nbviewer.org/github/mitmath/18335/blob/master/notes/...) and should instead use the built-in `mul!` in most scenarios. There are very few languages where such "start to finish" demonstrations can really be showcased in such a nice, clear fashion.
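
The "three loops vs. BLAS" progression described above is easy to reproduce. A rough benchmarking sketch (matrix size and loop order chosen for illustration) using the standard LinearAlgebra and BenchmarkTools packages:

```julia
# Naive 3-loop matrix multiply vs. the built-in BLAS-backed mul!.
using LinearAlgebra, BenchmarkTools

function matmul3!(C, A, B)
    fill!(C, zero(eltype(C)))
    # j-k-i order keeps the inner loop contiguous for column-major arrays.
    @inbounds for j in axes(B, 2), k in axes(A, 2), i in axes(A, 1)
        C[i, j] += A[i, k] * B[k, j]
    end
    return C
end

n = 512                        # arbitrary size for illustration
A, B = rand(n, n), rand(n, n)
C = similar(A)

@btime matmul3!($C, $A, $B)    # simple loops: correct, but far from peak
@btime mul!($C, $A, $B)        # BLAS-backed: typically much faster on the same data
```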

  • What are some interesting papers to read?
    2 projects | /r/Julia | 22 Nov 2021
    And why not take a course while you're at it.
  • Composability in Julia: Implementing Deep Equilibrium Models via Neural Odes
    2 projects | news.ycombinator.com | 21 Oct 2021
  • [2109.12449] AbstractDifferentiation.jl: Backend-Agnostic Differentiable Programming in Julia
    1 project | /r/Julia | 28 Sep 2021
  • Is that true?
    6 projects | /r/ProgrammerHumor | 8 Aug 2021
    Here's a good one. It's in Julia but it should do the trick. The main instructor is the most prolific Julia dev in the world.
  • [D] Has anyone worked with Physics Informed Neural Networks (PINNs)?
    3 projects | /r/MachineLearning | 21 May 2021
    NeuralPDE.jl fully automates the approach (and extensions of it, which are required to make it solve practical problems) from symbolic descriptions of PDEs, so that might be a good starting point to both learn the practical applications and get something running in a few minutes. As part of MIT 18.337 Parallel Computing and Scientific Machine Learning, I gave an early lecture on physics-informed neural networks (with a two-part video) describing the approach, how it works, and what its challenges are. You might find those resources enlightening.
  • [P] Machine Learning in Physics?
    1 project | /r/MachineLearning | 13 May 2021
    It's a very thriving field. If you are interested in methods research and want to learn some of the techniques behind it, I would recommend taking a dive into my lecture notes as I taught a graduate course at MIT, 18.337 Parallel Computing and Scientific Machine Learning, specifically designed to get new students onboarded into this research program.
  • MIT 18.337J: Parallel Computing and Scientific Machine Learning
    1 project | news.ycombinator.com | 19 Mar 2021

What are some alternatives?

When comparing GPUCompiler.jl and 18337 you can also consider the following projects:

KernelAbstractions.jl - Heterogeneous programming in Julia

DataDrivenDiffEq.jl - Data driven modeling and automated discovery of dynamical systems for the SciML Scientific Machine Learning organization

CUDA.jl - CUDA programming in Julia.

Vulpix - Fast, unopinionated, minimalist web framework for .NET core inspired by express.js

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

NeuralPDE.jl - Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

Vulkan.jl - Using Vulkan from Julia

SciMLTutorials.jl - Tutorials for doing scientific machine learning (SciML) and high-performance differential equation solving with open source software.

oneAPI.jl - Julia support for the oneAPI programming toolkit.

MPI.jl - MPI wrappers for Julia

LoopVectorization.jl - Macro(s) for vectorizing loops.

BenchmarkTools.jl - A benchmarking framework for the Julia language