Why Fortran is easy to learn

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • Octavian.jl

    Multi-threaded BLAS-like library that provides pure Julia matrix multiplication

  • > But in the end, it's FORTRAN all the way down. Even in Julia.

    That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU-factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story; all of it is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl
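
    As a rough illustration of the kind of call being described, here is a minimal sketch of a pure-Julia LU factorization via RecursiveFactorization.jl, assuming its `lu!` method mirrors `LinearAlgebra.lu!` (the exact API may differ):

      using LinearAlgebra, RecursiveFactorization   # RecursiveFactorization.jl assumed installed

      A = rand(200, 200)
      b = rand(200)

      # Reference factorization from the default BLAS/LAPACK backend
      F_lapack = lu(copy(A))

      # Pure-Julia recursive LU; assumes RecursiveFactorization.lu! behaves like LinearAlgebra.lu!
      F_julia = RecursiveFactorization.lu!(copy(A))

      # Both factorizations should solve the same system to within roundoff
      @assert F_lapack \ b ≈ F_julia \ b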

  • Julia defaults to OpenBLAS, but libblastrampoline makes it so that `using MKL` flips it to MKL on the fly. See the JuliaCon video for more details on that (https://www.youtube.com/watch?v=t6hptekOR7s). The recursive comparison is against OpenBLAS/LAPACK and MKL; see this PR for some (older) details: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2... . What it really comes down to in the end is that OpenBLAS is rather bad, and MKL is optimized for Intel CPUs but not for AMD CPUs, so now that the best CPUs are all AMD CPUs, having a new set of BLAS tools and mixing them with recursive LAPACK tools is as good or better on most modern systems. Then we see this in practice even when we build BLAS into Sundials for 1,000-ODE chemical reaction networks (https://benchmarks.sciml.ai/html/Bio/BCR.html).
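
    A minimal sketch of the BLAS switching being described, assuming Julia 1.7 or later (where libblastrampoline is the default) and that the MKL.jl package is installed:

      using LinearAlgebra

      # Show which BLAS/LAPACK libraries libblastrampoline is currently forwarding to
      println(BLAS.get_config())   # typically lists libopenblas by default

      using MKL                    # loading MKL.jl swaps the backing BLAS to MKL on the fly

      println(BLAS.get_config())   # should now list MKL's libraries instead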

  • julia

    The Julia Programming Language

  • Yeah, no worries. You might want to join the community at least, because this kind of discussion is par for the course on the Slack. You might find a nice home here!

    > though its threading implementation may not be robust

    That's one of the main issues I have with it. Its poor threading behavior, coupled with a bad default (https://github.com/JuliaLang/julia/issues/33409, one of my biggest pet peeves right now), really degrades real user performance.
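
    For context on how the thread counts involved are controlled in practice (just a sketch of the knobs, not a fix for the linked issue), the BLAS thread pool is separate from Julia's own threads and can be inspected and set at runtime:

      using LinearAlgebra

      Threads.nthreads()          # Julia-level threads, fixed at startup (e.g. `julia -t 4`)
      BLAS.get_num_threads()      # BLAS worker threads: a separate pool with its own default
      BLAS.set_num_threads(1)     # common mitigation when Julia threads and BLAS threads oversubscribe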

  • quickjs

    Public repository of the QuickJS Javascript Engine.

  • Ancient LISP and BASIC variants also stand out in this regard: they don't require you to undergo years of training and theory to implement and fully understand.

    Fabrice Bellard's minimalist but complete quickjs.c Javascript interpreter is presently about 54,000 lines of C:

      https://github.com/bellard/quickjs/blob/master/quickjs.c

  • TriangularSolve.jl

    rdiv!(::AbstractMatrix, ::UpperTriangular) and ldiv!(::LowerTriangular, ::AbstractMatrix)
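
    A minimal sketch of the two calls named in the description above, assuming TriangularSolve.jl provides `ldiv!`/`rdiv!` methods with exactly those signatures (mirroring LinearAlgebra's in-place triangular solves):

      using LinearAlgebra, TriangularSolve

      A = rand(100, 100)
      L = LowerTriangular(rand(100, 100) + 100I)   # well-conditioned lower-triangular factor
      U = UpperTriangular(rand(100, 100) + 100I)   # well-conditioned upper-triangular factor

      # In-place solves: overwrite the matrix argument with L \ A and A / U respectively
      TriangularSolve.ldiv!(L, copy(A))
      TriangularSolve.rdiv!(copy(A), U)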

  • SciMLBenchmarks.jl

    Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R

  • Fortran-code-on-GitHub

    Directory of Fortran codes on GitHub, arranged by topic

  • There's modern stuff being written in astro(nomy/physics) (I can attest to some of the codebases listed in https://github.com/Beliavsky/Fortran-code-on-GitHub#astrophy... being modern, at least in terms of development), but I'd say C++ likely does have the upper hand for newer codebases (unless things have changed dramatically since I last looked, algorithms that don't align nicely with nd-arrays are still painful in Fortran).

    I've also heard rumours of Julia and even Rust being used (the latter because of the ability to reuse libraries in the browser, e.g. for visualisation), but the writers of these codebases (and of the Fortran/C/C++/Java ones) are unusual; Python and R (and, for some holdouts, IDL) are what most people write in (even if those languages call something else).

  • 18337

    18.337 - Parallel Computing and Scientific Machine Learning

  • I would say Fortran is a pretty great language for teaching beginners in numerical analysis courses. The only issue I have with it is that, similar to using C+MPI (which is what I first learned with, well, after a bit of Java), students don't tend to learn how to go "higher level". You teach them how to write a three-loop matrix-matrix multiplication, but the next thing you should teach is how to use higher-level BLAS tools and why they will outperform the three-loop form. But Fortran then becomes very cumbersome (`dgemm` etc.), so students continue to write simple loops and simple algorithms where they shouldn't. A first numerical analysis course should teach simple algorithms AND why the simple algorithms are not good, but a lot of instructors and tools fail to emphasize the second part of that statement.

    On the other hand, the performance plus the high-level nature of Julia makes it a rather excellent tool for this. In the MIT graduate course 18.337 Parallel Computing and Scientific Machine Learning (https://github.com/mitmath/18337) we do precisely that, starting with direct optimization of loops, then moving to linear algebra, ODE solving, and implementing automatic differentiation. I don't think anyone would want to give a homework assignment to implement AD in Fortran, but in Julia you can do that shortly after looking at loop performance and SIMD, and that's really something special. Steven Johnson's 18.335 graduate course in Numerical Analysis (https://github.com/mitmath/18335) showcases some similar niceties. I really like the demonstration that starts from scratch with the three loops and shows how SIMD and cache-oblivious algorithms build towards BLAS performance, and why most users should ultimately not be writing such loops (https://nbviewer.org/github/mitmath/18335/blob/master/notes/...) and should instead use the built-in `mul!` in most scenarios. There are very few languages where such "start to finish" demonstrations can be showcased in such a nice, clear fashion.
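
    As a concrete version of the progression described above, here is a minimal, generic sketch (not course material, just an illustration) of the three-loop multiply next to the built-in `mul!`:

      using LinearAlgebra

      # The "three loop" matrix-matrix multiply a first course starts from
      function matmul3!(C, A, B)
          @assert size(C) == (size(A, 1), size(B, 2)) && size(A, 2) == size(B, 1)
          fill!(C, zero(eltype(C)))
          @inbounds for j in axes(B, 2), k in axes(A, 2), i in axes(A, 1)
              C[i, j] += A[i, k] * B[k, j]
          end
          return C
      end

      n = 300
      A, B = rand(n, n), rand(n, n)
      C1, C2 = zeros(n, n), zeros(n, n)

      matmul3!(C1, A, B)        # simple loops: correct, but far from BLAS speed
      mul!(C2, A, B)            # built-in BLAS-backed multiply most users should reach for
      @assert C1 ≈ C2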

  • 18335

    18.335 - Introduction to Numerical Methods course

  • BLIS.jl

    This repo plans to provide a low-level Julia wrapper for the BLIS typed interface.

  • It doesn't look like you're measuring factorization performance? OpenBLAS matrix-matrix multiplication is fine; it just falls apart when going to things like Cholesky and LU.

    > not the default, I've now checked

    Whatever the Julia default build is doing, so probably not the recursive LAPACK routines if that's how it's being built. If there's a better default, that's worth an issue.

    > That said, I don't understand why people avoid AMD's BLAS/LAPACK

    There just isn't a BLIS wrapper for Julia right now, and it's much easier to just write new BLAS tools than to build wrappers, IMO. It also makes it very easy to customize to nonstandard Julia number types. But I do think that BLIS is a great project and I would like to see it replace OpenBLAS as the default. There's been some discussion of making it as easy as MKL (https://github.com/JuliaLinearAlgebra/BLIS.jl/issues/3).
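
    To make the distinction in the first sentence of this comment concrete, a timing sketch (using BenchmarkTools, against whatever BLAS the running Julia session is configured with) that separates matrix-multiplication throughput from LU-factorization throughput:

      using LinearAlgebra, BenchmarkTools

      n = 500
      A, B, C = rand(n, n), rand(n, n), zeros(n, n)

      # Matrix-matrix multiplication: the case where OpenBLAS is generally fine
      @btime mul!($C, $A, $B)

      # LU factorization: the case the comment says falls apart on OpenBLAS
      @btime lu!(M) setup=(M = copy($A)) evals=1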

  • MPI.jl

    MPI wrappers for Julia

  • CUDA.jl

    CUDA programming in Julia.

  • rr

    Record and Replay Framework

  • It depends a bit on what you want, but https://rr-project.org/ works really well with Julia.

  • GPUCompiler.jl

    Reusable compiler infrastructure for Julia GPU backends.

  • Julia's compiler is made to be extendable. GPUCompiler.jl, which adds the .ptx compilation output, for example, is a package (https://github.com/JuliaGPU/GPUCompiler.jl). The package manager of Julia itself... is an external package (https://github.com/JuliaLang/Pkg.jl). The built-in SuiteSparse usage? That's a package too (https://github.com/JuliaLang/SuiteSparse.jl). It's fairly arbitrary what is "external" and "internal" in a language that allows that kind of extendability. Literally the only thing that makes these packages a standard library is that they are built into and shipped with the standard system image. Do you want to make your own distribution of Julia that changes what the "internal" packages are? Here's a tutorial that shows how to add plotting to the system image (https://julialang.github.io/PackageCompiler.jl/dev/examples/...). You could set up a binary server for that, and now the time to first plot is 0.4 seconds.

    Julia's array system is built so that most arrays that are used are not the simple Base.Array. Instead, Julia has an AbstractArray interface definition (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...) which Base.Array conforms to, and which many effectively-standard-library packages like StaticArrays.jl, OffsetArrays.jl, etc. conform to as well, and thus they can be used in any other Julia package, like the differential equation solvers, nonlinear system solvers, optimization libraries, etc. There is a higher chance that packages depend on these packages than that they do not. They are only not part of the Julia distribution because the core idea is to move everything possible out to packages. There's not only a plan to make SuiteSparse and sparse matrix support a package in 2.0, but also ideas about making the rest of linear algebra and arrays themselves into packages where Julia just defines a memory buffer intrinsic (with likely an Arrays.jl package still shipped with the default image). At that point, are arrays not built into the language? I can understand using such a narrow definition for systems like Fortran or C where the standard library is essentially a fixed concept, but that just does not make sense with Julia. It's inherently fuzzy.
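
    A minimal sketch of the AbstractArray interface mentioned above (essentially the SquaresVector example from the linked manual page): a toy array type that only implements `size` and `getindex`, and then works with generic code just like Base.Array does:

      using LinearAlgebra

      # A toy read-only vector whose i-th element is i^2.
      # Implementing just these two interface methods is enough for
      # generic Julia code to treat it like any other vector.
      struct SquaresVector <: AbstractVector{Int}
          n::Int
      end

      Base.size(v::SquaresVector) = (v.n,)
      Base.getindex(v::SquaresVector, i::Int) = i^2

      v = SquaresVector(5)        # behaves like [1, 4, 9, 16, 25]
      sum(v)                      # generic reductions work
      dot(v, v)                   # and so does generic linear algebra
      v .+ 1                      # broadcasting too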

  • Pkg.jl

    Pkg - Package manager for the Julia programming language

  • SuiteSparse.jl

    Development of SuiteSparse.jl has been discontinued; it ships as part of the Julia standard library.
