| | CUDA.jl | Octavian.jl |
|---|---|---|
| Mentions | 15 | 17 |
| Stars | 1,133 | 222 |
| Growth | 1.1% | 0.0% |
| Activity | 9.5 | 3.9 |
| Last commit | 7 days ago | 26 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CUDA.jl
-
Ask HN: Best way to learn GPU programming?
It would also mean learning Julia, but you can write GPU kernels in Julia and then compile them for NVIDIA CUDA, AMD ROCm or Intel oneAPI.
https://juliagpu.org/
I've written CUDA kernels and I knew nothing about it going in.
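For a sense of what this looks like in practice, here is a minimal CUDA.jl kernel sketch (assumes CUDA.jl and an NVIDIA GPU; the kernel and variable names are just illustrative):

```julia
using CUDA

# A hand-written element-wise kernel: y .= a .* x .+ y
function saxpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

N = 2^20
x = CUDA.rand(Float32, N)
y = CUDA.rand(Float32, N)

# Launch enough blocks of 256 threads to cover all N elements.
@cuda threads=256 blocks=cld(N, 256) saxpy_kernel!(y, 2.0f0, x)
```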
- What's your main programming language?
-
How is Julia Performance with GPUs (for LLMs)?
See https://juliagpu.org/
-
Yann Lecun: ML would have advanced if other lang had been adopted versus Python
If you look at Julia open source projects you'll see that the projects tend to have a lot more contributors than the Python counterparts, even over smaller time periods. A package for defining statistical distributions has had 202 contributors (https://github.com/JuliaStats/Distributions.jl), etc. Julia Base even has had over 1,300 contributors (https://github.com/JuliaLang/julia) which is quite a lot for a core language, and that's mostly because the majority of the core is in Julia itself.
This is one of the things that was noted quite a bit at this SIAM CSE conference: Julia development tends to have a lot more code reuse than other ecosystems like Python. For example, the various machine learning libraries like Flux.jl and Lux.jl share a lot of layer intrinsics in NNlib.jl (https://github.com/FluxML/NNlib.jl), the same GPU libraries (https://github.com/JuliaGPU/CUDA.jl), the same automatic differentiation library (https://github.com/FluxML/Zygote.jl), and of course the same JIT compiler (Julia itself). These two libraries are far enough apart that people say "Flux is to PyTorch as Lux is to JAX/flax", but while in the Python world those share almost no code or implementation, in the Julia world they share >90% of the core internals but have different higher-level APIs.
If one hasn't participated in this space, it's a bit hard to fathom how much code reuse goes on and how that is influenced by the design of multiple dispatch. This is one of the reasons there is so much cohesion in the community: it doesn't matter if one person is an ecologist and the other is a financial engineer, you may both be contributing to the same library, like Distances.jl, just adding a distance function which is then used in thousands of places. With the Python ecosystem you tend to have a lot more "megapackages", PyTorch, SciPy, etc., where the barrier to entry is generally a lot higher (and sometimes requires handling the build systems, fun times). But in the Julia ecosystem you have a lot of core development happening in "small" but central libraries, like Distances.jl or Distributions.jl, which are simple enough for an undergrad to get productive in a week but are then used everywhere (Distributions.jl, for example, is used in every statistics package and for defining prior distributions in Turing.jl's probabilistic programming language, etc.).
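A toy sketch of the multiple-dispatch pattern described above (the types and functions here are hypothetical, not the actual Distances.jl API): one contributor adds a single small method, and every generic algorithm written against the abstract type can use it.

```julia
abstract type Metric end

struct Euclidean <: Metric end
struct Manhattan <: Metric end

# Each contributor only needs to add one small method...
dist(::Euclidean, x, y) = sqrt(sum(abs2, x .- y))
dist(::Manhattan, x, y) = sum(abs, x .- y)

# ...and any generic code written against `Metric` picks it up for free.
function nearest(metric::Metric, query, points)
    _, i = findmin(p -> dist(metric, query, p), points)
    return points[i]
end

points = [rand(3) for _ in 1:100]
nearest(Euclidean(), rand(3), points)
nearest(Manhattan(), rand(3), points)
```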
-
C++ is making me depressed / CUDA question
If you just want to do some numerical code that requires linear algebra and GPU, your best bet would be Julia or Python+JAX.
-
Nearly trivial distributed parallelization of stencil-based GPU and CPU applications with…
GitHub - JuliaGPU/CUDA.jl: CUDA programming in Julia.
- Why Fortran is easy to learn
-
Generic GPU Kernels
Should have (2017) in the title.
Indeed, it's cool to program Julia directly on the GPU, and this has further evolved since then; see https://juliagpu.org/
-
Announcing The Rust CUDA Project; An ecosystem of crates and tools for writing and executing extremely fast GPU code fully in Rust
I'm excited to eventually see something like JuliaGPU with support for multiple backends.
-
[Media] 100% Rust path tracer running on CPU, GPU (CUDA), and OptiX (for denoising) using one of my upcoming projects. There is no C/C++ code at all, the program shares a single rust crate for the core raytracer and uses rust for the viewer and renderer.
That's really cool! Have you looked at CUDA.jl for the Julia language? Maybe you could take some ideas from there. I am pretty sure it does the same thing you do here, and it supports arbitrary code, with the limitations that you cannot allocate memory, I/O is disallowed, and badly typed (dynamic) code will not compile.
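To illustrate the "arbitrary code" point (a minimal sketch assuming CUDA.jl and a supported GPU; `smoothstep` here is a made-up example function): an ordinary Julia function that never mentions the GPU can be broadcast over a CuArray, as long as it stays type-stable, allocation-free, and I/O-free.

```julia
using CUDA

# An ordinary scalar function, written without any GPU code in mind.
smoothstep(x) = x < 0 ? zero(x) : x > 1 ? one(x) : x * x * (3 - 2x)

xs = CUDA.rand(Float32, 1_000_000)

# Broadcasting compiles `smoothstep` into a GPU kernel automatically.
ys = smoothstep.(xs)
```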
Octavian.jl
- Yann Lecun: ML would have advanced if other lang had been adopted versus Python
-
Julia 1.8 has been released
For some examples of people porting existing C++ and Fortran libraries to Julia, you should check out https://github.com/JuliaLinearAlgebra/Octavian.jl, https://github.com/dgleich/GenericArpack.jl, and https://github.com/apache/arrow-julia (just off the top of my head). These are all ports of C++ or Fortran libraries that match (or exceed) the performance of the original, and in the case of Arrow.jl, it is faster, more general, and 10x less code.
-
Why Julia matrix multiplication so slow in this test?
Note that a performance-optimized Julia implementation is on par with, or even outperforms, the specialized high-performance BLAS libraries; see https://github.com/JuliaLinearAlgebra/Octavian.jl.
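A quick way to try that comparison yourself (a sketch assuming Octavian.jl and BenchmarkTools.jl are installed; `Octavian.matmul!` is the in-place multiply):

```julia
using Octavian, LinearAlgebra, BenchmarkTools

n = 512
A, B = rand(n, n), rand(n, n)
C = similar(A)

@btime mul!($C, $A, $B)              # default BLAS shipped with Julia (OpenBLAS)
@btime Octavian.matmul!($C, $A, $B)  # pure-Julia Octavian
```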
-
Multiple dispatch: Common Lisp vs Julia
If you look at the thread for your first reference, there were a large number of performance improvements suggested that resulted in a 30x speedup when combined. I'm not sure what you're looking at for your second link, but Julia is faster than Lisp in the n-body, spectral norm, mandelbrot, pidigits, regex, fasta, k-nucleotide, and reverse complement benchmarks (8 out of 10). For Julia going faster than C/Fortran, I would direct you to https://github.com/JuliaLinearAlgebra/Octavian.jl, which is a Julia program that beats MKL and OpenBLAS for matrix multiplication (which is one of the most heavily optimized algorithms in the world).
-
Why Fortran is easy to learn
> But in the end, it's FORTRAN all the way down. Even in Julia.
That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story; all of it is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl
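For example, the pure-Julia LU factorization mentioned above can be used much like the LAPACK one (a sketch assuming RecursiveFactorization.jl; `lu!` mirrors the in-place LinearAlgebra API):

```julia
using LinearAlgebra, RecursiveFactorization

A = rand(500, 500)
b = rand(500)

F = RecursiveFactorization.lu!(copy(A))  # pure-Julia LU, no LAPACK/Fortran call
x = F \ b                                # reuse the factorization to solve Ax = b
```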
-
Show HN: prometeo – a Python-to-C transpiler for high-performance computing
Well, IMO it can definitely be rewritten in Julia, and more easily than in Python, since Julia allows hooking into the compiler pipeline at many points of the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linear algebra benchmarks.
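To make that concrete, this is roughly what a LoopVectorization-based multiply kernel looks like (a minimal sketch assuming LoopVectorization.jl; Octavian's real kernels additionally handle packing, blocking, and threading):

```julia
using LoopVectorization

function gemm_kernel!(C, A, B)
    # @turbo analyzes and rewrites this loop nest into SIMD/unrolled code,
    # doing the transforms on the Julia side rather than leaving them to LLVM.
    @turbo for n in axes(C, 2), m in axes(C, 1)
        acc = zero(eltype(C))
        for k in axes(A, 2)
            acc += A[m, k] * B[k, n]
        end
        C[m, n] = acc
    end
    return C
end

A, B = rand(200, 300), rand(300, 100)
C = zeros(200, 100)
gemm_kernel!(C, A, B)
```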
-
Python behind the scenes #13: the GIL and its effects on Python multithreading
The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl
-
From Julia to Rust
> The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.
The presence of garbage collection in julia is not a problem at all for hot, high performance code. There's nothing stopping you from manually managing your memory in julia.
The easiest way would be to just preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices [1]. These are some of the hottest loops possible!
For true hard real-time work, yes, Julia is not a good choice, but it's perfectly fine for soft real-time.
[1] https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...
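The preallocation pattern described above looks roughly like this (a sketch; `Octavian.matmul!` is the in-place multiply, and the loop stands in for whatever hot code you have):

```julia
using Octavian

# Allocate work buffers once, outside the hot loop...
A, B = rand(256, 256), rand(256, 256)
C = similar(A)

function hot_loop!(C, A, B, iters)
    for _ in 1:iters
        # ...and use only in-place operations inside it, so nothing is
        # allocated and the garbage collector never needs to run.
        Octavian.matmul!(C, A, B)
    end
    return C
end

hot_loop!(C, A, B, 1_000)
```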
-
Julia 1.6 addresses latency issues
If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia out-performing highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.
Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.
- Octavian.jl – BLAS-like Julia procedures for CPU
What are some alternatives?
LoopVectorization.jl - Macro(s) for vectorizing loops.
OpenBLAS - OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
Symbolics.jl - Symbolic programming for the next generation of numerical software
awesome-quant - A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz
cudf - cuDF - GPU DataFrame Library
Verilog.jl - Verilog for Julia
Tullio.jl - ⅀
Automa.jl - A julia code generator for regular expressions
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)