Then, if everything is pure functions, instead of iteratively mutating an array, you make recursive calls that build new arrays, each changing one element at a time. But wait, doesn't that sound very inefficient compared to mutation? Well yes it does! It would satisfy the purity requirement that lets a compiler auto-parallelize, but it would allocate so many temporary arrays that it would likely be slower than a good explicit loop. For this reason you need compiler optimizations that remove the intermediate arrays and transform the code under the hood into mutating code (see https://github.com/JuliaLang/julia/pull/42465 for an example of this in the Julia compiler). That is one required optimization; another is the tail-call elimination I mentioned earlier, and so on. If you have all pure functions, and if all of these optimizations are perfect, then you can match the serial performance of C/Fortran. But that is a big if, which is why you don't see successful BLASes written in, say, Haskell (GHC is a good compiler, but it's hard to make this perfect).
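To make the allocation cost concrete, here is a rough Python sketch (my own illustration, not code from the thread) of the two styles: a "pure" update that builds a new array on every step versus an explicit in-place loop. Without a compiler optimization that elides the intermediate copies, the pure version does O(n) allocation per element, O(n^2) work total:

```python
def set_pure(xs, i, v):
    # "Pure" update: build a brand-new tuple with one element changed.
    # Each call copies the whole array, so a loop of these is O(n^2).
    return xs[:i] + (v,) + xs[i + 1:]

def double_all_pure(xs):
    # Functional style: rebind to a fresh array at every step.
    for i in range(len(xs)):
        xs = set_pure(xs, i, xs[i] * 2)
    return xs

def double_all_mutating(xs):
    # Explicit mutating loop: O(n) total work, no temporaries.
    for i in range(len(xs)):
        xs[i] *= 2
    return xs

print(double_all_pure((1, 2, 3)))      # (2, 4, 6)
print(double_all_mutating([1, 2, 3]))  # [2, 4, 6]
```

Both produce the same result; the optimization described above is essentially the compiler recognizing that the first form can be safely rewritten into the second.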
-
I think that's the issue, but it's really a question of context. For a lot of the groups using Julia, the cases excluded from this approach are 99% of what people do daily in scientific computing. For reference, for those who don't know: a nice Julia library for these kinds of tools is https://github.com/JuliaCollections/FunctionalCollections.jl, and it is used in places where functional styles and pure functions are more common, such as the symbolic computing libraries.
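The key trick behind persistent collections like those in FunctionalCollections.jl is structural sharing: "updating" a collection creates a new value that reuses most of the old one instead of copying it. A minimal Python sketch of the idea (my own illustration, not that library's API) using a persistent cons-list:

```python
class PList:
    # Immutable singly-linked list node.
    __slots__ = ("head", "tail")
    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

def cons(x, lst):
    # "Adding" an element never mutates lst: the old list stays valid,
    # and the new list shares its entire tail with it.
    return PList(x, lst)

def to_list(lst):
    # Walk the linked structure into a plain Python list for display.
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

base = cons(2, cons(3, None))
extended = cons(1, base)
print(to_list(base))           # [2, 3]
print(to_list(extended))       # [1, 2, 3]
print(extended.tail is base)   # True: shared structure, no copying
```

Real persistent vectors use wide trees rather than linked lists so that indexed updates share all but a few nodes, but the sharing principle is the same.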
-
But for these to be as fast as, say, an Array when used as the state of a differential equation solve or as the underlying construct of a nonlinear optimization, you would need the compiler to elide the struct construction, which it doesn't always do. This is why the tooling evolved toward things like https://github.com/jonniedie/ComponentArrays.jl instead, where you have an Array-backed object with a higher-level interface. Such immutable objects are used in these array-like contexts when the problems are small enough (FieldVectors, or SLVector from LabelledArrays.jl, in DiffEq), and such applications work well in Haskell too, but I haven't seen a compiler do well with, say, a 1,000-ODE model written in this style. And that's not an extreme case if it's what people are doing daily.
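The ComponentArrays.jl design can be sketched as "one flat array the solver sees, plus named views into it." Here is a very rough Python illustration of that shape (all names here are mine, not the library's actual API):

```python
class ComponentVector:
    # One contiguous flat buffer, with a named slice per component.
    def __init__(self, **components):
        self.slices = {}
        self.data = []
        for name, values in components.items():
            start = len(self.data)
            self.data.extend(values)
            self.slices[name] = slice(start, len(self.data))

    def __getitem__(self, name):
        # Named access pulls out that component's block of the flat buffer.
        return self.data[self.slices[name]]

    def flat(self):
        # A solver or optimizer only ever sees this flat vector,
        # so it gets plain-Array performance.
        return self.data

u = ComponentVector(position=[1.0, 2.0], velocity=[0.5, -0.5])
print(u.flat())        # [1.0, 2.0, 0.5, -0.5]
print(u["velocity"])   # [0.5, -0.5]
```

The point of the design is that the struct-like naming is a view layer over contiguous memory, so nothing needs to be elided by the compiler for the hot loop to be fast.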