Yeah, we've managed to get Julia itself running pretty well on the M1; there are still a few outstanding issues, such as backtraces not being as high quality as on other platforms. You can see the overall tracking issue [0] for a more granular status on the platform support.
For the package ecosystem as a whole, we will be slowly increasing the number of third-party packages built for aarch64-darwin, but this is a major undertaking, so I don't expect it to be truly "finished" for 3-6 months. The reasons are both technical (packages may not build cleanly on aarch64-darwin and may need patching or updating, especially since some of our compilers, such as gfortran, are prerelease testing builds; building for aarch64-darwin also means packages must be marked as compatible with Julia 1.6+ only, due to a limitation in Julia 1.5 and earlier; etc.) and practical (our packaging team is primarily volunteers, and they only have so much bandwidth to help fix compilation issues).
[0] https://github.com/JuliaLang/julia/issues/36617
https://github.com/kostya/benchmarks
Is this because of bad typing, or because they didn't use Julia in an idiomatic manner?
Very often benchmarks include Julia's compilation time, which can be slow. Sometimes they rightfully do so, but often it's really apples and oranges when benchmarking against C/C++/Rust/Fortran. Julia 1.6 shows compilation time in the `@time f()` macro, but Julia programmers typically use `@btime` from the BenchmarkTools package to get better timings (e.g., the minimum runtime over many function calls).
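To illustrate the difference, here's a minimal sketch (the function `sumsq` is just a made-up example): the first `@time` call pays the one-off compilation cost, while `@btime` runs the function many times and reports a timing that excludes it.

```julia
using BenchmarkTools

# Toy function: the first call triggers compilation for the argument types
sumsq(xs) = sum(x -> x^2, xs)

xs = rand(10^6)

@time sumsq(xs)   # first call: includes compilation time
@time sumsq(xs)   # later calls: runtime only

# @btime runs the call many times and reports the minimum runtime,
# which filters out both compilation and timer noise
@btime sumsq($xs)
```

Note the `$xs` interpolation, which BenchmarkTools uses to avoid measuring global-variable overhead.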
I think it's more interesting to see what people do with the language instead of focusing on microbenchmarks. There's for instance this great package https://github.com/JuliaSIMD/LoopVectorization.jl which exports a simple macro `@avx` which you can stick on loops to vectorize them in ways better than the compiler (i.e., LLVM). It's quite remarkable that you can implement this in the language as a package, as opposed to having LLVM improve or having the Julia compiler team figure this out.
See the docs which kinda read like blog posts: https://juliasimd.github.io/LoopVectorization.jl/stable/
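As a rough sketch of what using it looks like (the `mydot` function here is an illustrative example, not from the package docs), you annotate an ordinary loop with `@avx` and LoopVectorization rewrites it with SIMD instructions:

```julia
using LoopVectorization

# A plain dot-product loop; @avx asks LoopVectorization to vectorize it
function mydot(a, b)
    s = zero(eltype(a))
    @avx for i in eachindex(a)
        s += a[i] * b[i]
    end
    return s
end

a = rand(1000)
b = rand(1000)
mydot(a, b)  # same result as sum(a .* b), often noticeably faster
```

The appeal is that the loop body stays ordinary Julia code; the macro does the transformation at compile time.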
This package is 70 lines of Julia code. You can check it out for yourself here: https://github.com/OTDE/GalaxyBrain.jl
I talk about this package in-depth here: https://medium.com/@otde/six-months-with-julia-parse-time-tr...
This is the PR to look at if you want to try and help: https://github.com/tshort/StaticCompiler.jl/pull/46
I think this isn't really a great place for beginners, though, unfortunately. This project is tightly coupled to undocumented internals of not only Julia's compiler, but also LLVM.jl and GPUCompiler.jl. It'll require a lot of learning to be able to meaningfully contribute at this stage.
We call these "system images" and you can generate them with [PackageCompiler](https://github.com/JuliaLang/PackageCompiler.jl). Unfortunately, it's still a little cumbersome to create them, but this is something that we're improving from release to release. One possible future is where an environment can be "baked", such that when you start Julia pointing to that environment (via `--project`) it loads all the packages more or less instantaneously.
The downside is that generating system images can be quite slow, so we're still working on ways to generate them incrementally. In any case, if you're inspired to work on this kind of stuff, it's definitely something the entire community is interested in!
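For reference, generating a system image today looks roughly like this (the package names and output path are illustrative, but `create_sysimage` is PackageCompiler's actual entry point):

```julia
using PackageCompiler

# Precompile the listed packages into a custom system image.
# This can take a while, which is the slowness mentioned above.
create_sysimage([:DataFrames, :CSV];
                sysimage_path = "custom_sysimage.so")
```

You then start Julia with `julia --sysimage custom_sysimage.so`, and the baked-in packages load more or less instantly.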