Octavian.jl VS RecursiveFactorization.jl

Compare Octavian.jl vs RecursiveFactorization.jl and see what their differences are.

Octavian.jl

Multi-threaded BLAS-like library that provides pure Julia matrix multiplication (by JuliaLinearAlgebra)
                Octavian.jl                                RecursiveFactorization.jl
Mentions        17                                         8
Stars           222                                        74
Growth          0.0%                                       -
Activity        3.9                                        6.1
Latest commit   24 days ago                                6 days ago
Language        Julia                                      Julia
License         GNU General Public License v3.0 or later   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Octavian.jl

Posts with mentions or reviews of Octavian.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-22.
  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
  • Julia 1.8 has been released
    8 projects | news.ycombinator.com | 18 Aug 2022
    For some examples of people porting existing C++/Fortran libraries to Julia, you should check out https://github.com/JuliaLinearAlgebra/Octavian.jl, https://github.com/dgleich/GenericArpack.jl, and https://github.com/apache/arrow-julia (just off the top of my head). These are all ports of C++ or Fortran libraries that match (or exceed) the performance of the original, and in the case of Arrow.jl it is faster, more general, and 10x less code.
  • Why Julia matrix multiplication so slow in this test?
    2 projects | /r/Julia | 31 May 2022
    Note that a performance-optimized Julia implementation is on par with, or even outperforms, the specialized high-performance BLAS libraries; see https://github.com/JuliaLinearAlgebra/Octavian.jl.
  • Multiple dispatch: Common Lisp vs Julia
    4 projects | /r/Julia | 5 Mar 2022
    If you look at the thread for your first reference, there were a large number of performance improvements suggested that resulted in a 30x speedup when combined. I'm not sure what you're looking at for your second link, but Julia is faster than Lisp in the n-body, spectral norm, mandelbrot, pidigits, regex, fasta, k-nucleotide, and reverse complement benchmarks (8 out of 10). For Julia going faster than C/Fortran, I would direct you to https://github.com/JuliaLinearAlgebra/Octavian.jl, which is a Julia program that beats MKL and OpenBLAS for matrix multiplication (which is one of the most heavily optimized algorithms in the world).
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    > But in the end, it's FORTRAN all the way down. Even in Julia.

    That's not true. None of the Julia differential equation solver stack calls into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story. The performance of all of this is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl
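
As a concrete illustration of how that pure-Julia stack gets used from a solver, here is a hedged sketch of routing a stiff ODE solve's LU factorizations through RecursiveFactorization.jl. It assumes `RFLUFactorization` is LinearSolve.jl's wrapper around RecursiveFactorization (as I understand the current interface), and the Robertson problem is a stand-in, not taken from the comment above.

    # Sketch (under the assumptions above): use RecursiveFactorization-backed LU
    # for the linear solves inside a stiff ODE solver.
    using OrdinaryDiffEq, LinearSolve

    function rober!(du, u, p, t)          # classic Robertson stiff test problem
        y1, y2, y3 = u
        du[1] = -0.04y1 + 1e4 * y2 * y3
        du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
        du[3] =  3e7 * y2^2
    end

    prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))
    sol  = solve(prob, Rodas5(linsolve = RFLUFactorization()))  # pure-Julia LU in the hot path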

  • Show HN: prometeo – a Python-to-C transpiler for high-performance computing
    19 projects | news.ycombinator.com | 17 Nov 2021
    Well IMO it can definitely be rewritten in Julia, and to an easier degree than Python, since Julia allows hooking into the compiler pipeline at many areas of the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.

    You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe

    For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linalg benchmarks.

  • Python behind the scenes #13: the GIL and its effects on Python multithreading
    2 projects | news.ycombinator.com | 29 Sep 2021
    The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl.
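
A rough sketch of that kind of head-to-head measurement, assuming Octavian.jl and BenchmarkTools.jl are installed; the size here is illustrative, not the exact benchmark the comment refers to.

    using LinearAlgebra, BenchmarkTools
    using Octavian

    n = 512                      # around the size range discussed above
    A = rand(n, n); B = rand(n, n); C = similar(A)

    @btime mul!($C, $A, $B)              # BLAS-backed multiply (OpenBLAS or MKL, whichever is active)
    @btime Octavian.matmul!($C, $A, $B)  # pure-Julia, multi-threaded multiply into the same buffer
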
  • From Julia to Rust
    14 projects | news.ycombinator.com | 5 Jun 2021
    > The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.

    The presence of garbage collection in Julia is not a problem at all for hot, high-performance code. There's nothing stopping you from manually managing your memory in Julia.

    The easiest way would be to just preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in Julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices [1]. These are some of the hottest loops possible!

    For true hard real-time, Julia is not a good choice, but it's perfectly fine for soft real-time.

    [1] https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...
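
A minimal sketch of the preallocation pattern described above: allocate the buffers once, reuse them in the hot loop, and check with `@allocated` that the repeated call no longer allocates. The `step!` kernel is a made-up name for illustration.

    using LinearAlgebra

    n = 256
    A = rand(n, n)
    x = rand(n)
    y = similar(x)                    # preallocated output buffer, held onto across iterations

    step!(y, A, x) = mul!(y, A, x)    # in-place kernel: writes into y, no new arrays created

    step!(y, A, x)                    # warm up / compile before measuring
    @show @allocated step!(y, A, x)   # expected to report 0 bytes, i.e. nothing for the GC to do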

  • Julia 1.6 addresses latency issues
    5 projects | news.ycombinator.com | 25 May 2021
    If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia outperforming highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure-Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.

    Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.

  • Octavian.jl – BLAS-like Julia procedures for CPU
    1 project | news.ycombinator.com | 23 May 2021

RecursiveFactorization.jl

Posts with mentions or reviews of RecursiveFactorization.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd see from the benchmarks if you read them). One of the key differences, of course, is that there are some pure-Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalents in many scenarios, and that's one noted factor in the performance boost (and it is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that the comparison is not apples to apples and is instead conservative in many cases; for example, https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead shows that the SciPyDiffEq handling with the Julia JIT optimizations gives lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead to call from a JIT-compiled language than from a statically compiled language like C, because you can optimize away some of the bindings at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising, because it's pretty clear how that works.
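
For reference, this is what calling into an already-loaded shared library looks like from Julia; the `clock` example is the standard one from the Julia manual, not the ffi-overhead benchmark's own test function.

    # Foreign call straight into libc; Julia knows the signature at compile time,
    # so the call site compiles down to a plain C call.
    t  = ccall(:clock, Int32, ())      # classic ccall form
    t2 = @ccall clock()::Int32         # @ccall macro form (Julia 1.5+)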

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto updating tons and tons of ODE benchmarks with ODE systems ranging from size 2 to the thousands, with as many things as we can wrap in as many scenarios as we can wrap. And we don't even "win" all of our benchmarks because unlike for you, these benchmarks aren't for winning but for tracking development (somehow for Hacker News folks they ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
  • Small Neural networks in Julia 5x faster than PyTorch
    8 projects | news.ycombinator.com | 14 Apr 2022
    Ask them to download Julia and try it, and file an issue if it is not fast enough. We try to have the latest available.

    See for example: https://github.com/JuliaLinearAlgebra/RecursiveFactorization...

  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    Julia defaults to OpenBLAS, but libblastrampoline makes it so that `using MKL` flips it to MKL on the fly. See the JuliaCon video for more details on that (https://www.youtube.com/watch?v=t6hptekOR7s). The recursive comparison is against OpenBLAS/LAPACK and MKL; see this PR for some (older) details: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2... . What it really comes down to in the end is that OpenBLAS is rather bad and MKL is optimized for Intel CPUs but not for AMD CPUs, so now that the best CPUs are all AMD, having a new set of BLAS tools and mixing that with recursive LAPACK tools is as good or better on most modern systems. Then we see this in practice even when we build BLAS into Sundials for 1,000-ODE chemical reaction networks (https://benchmarks.sciml.ai/html/Bio/BCR.html).
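
A sketch of the backend flip described above, assuming MKL.jl is installed and Julia >= 1.7 (i.e., libblastrampoline is in use); `BLAS.get_config()` reports which libraries calls are currently forwarded to.

    using LinearAlgebra
    BLAS.get_config()   # typically lists libopenblas, Julia's default backend

    using MKL           # MKL.jl swaps the libblastrampoline backend at runtime
    BLAS.get_config()   # should now list MKL instead of OpenBLAS
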
  • Julia 1.7 has been released
    15 projects | news.ycombinator.com | 30 Nov 2021
    >I hope those benchmarks are coming in hot

    M1 is extremely good for PDEs because of its large cache lines.

    https://github.com/SciML/DiffEqOperators.jl/issues/407#issue...

    The JuliaSIMD tools, which we use internally for BLAS operations instead of OpenBLAS and MKL (because they tend to outperform standard BLASes for the operations we use, see https://github.com/YingboMa/RecursiveFactorization.jl/pull/2...), also generate good code for M1, so that was giving us some powerful use cases right off the bat, even before the heroics that allowed C/Fortran compilers to fully work on M1.

  • Why I Use Nim instead of Python for Data Processing
    12 projects | news.ycombinator.com | 23 Sep 2021
    Not necessarily true with Julia. Many libraries like DifferentialEquations.jl are Julia all of the way down because the pure Julia BLAS tools outperform OpenBLAS and MKL in certain areas. For example see:

    https://github.com/YingboMa/RecursiveFactorization.jl/pull/2...

    So a stiff ODE solve is pure Julia, LU-factorizations and all.
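
A sketch of that drop-in usage: RecursiveFactorization's LU follows the LinearAlgebra interface, so the resulting factorization can be reused for solves just like a LAPACK-backed one. Module-qualified calls are shown since, as far as I know, the functions are not exported.

    using LinearAlgebra, RecursiveFactorization

    A = rand(500, 500)
    b = rand(500)

    F = RecursiveFactorization.lu!(copy(A))   # pure-Julia recursive LU, same result type as LinearAlgebra.lu
    x = F \ b                                 # reuse the factorization for the solve

    @assert x ≈ lu(A) \ b                     # agrees with the BLAS/LAPACK-backed factorization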

  • Julia Receives DARPA Award to Accelerate Electronics Simulation by 1,000x
    7 projects | news.ycombinator.com | 11 Mar 2021
    Also, the major point is that BLAS has little to no role played here. Algorithms which just hit BLAS are very suboptimal already. There's a tearing step which reduces the problem to many subproblems which is then more optimally handled by pure Julia numerical linear algebra libraries which greatly outperform OpenBLAS in the regime they are in:

    https://github.com/YingboMa/RecursiveFactorization.jl#perfor...

    And there are hooks in the differential equation solvers to not use OpenBLAS in many cases for this reason:

    https://github.com/SciML/DiffEqBase.jl/blob/master/src/linea...

    Instead what this comes out to is more of a deconstructed KLU, except instead of parsing to a single sparse linear solve you can do semi-independent nonlinear solves which are then spawning parallel jobs of small semi-dense linear solves which are handled by these pure Julia linear algebra libraries.

    And that's only a small fraction of the details. But at the end of the day, if someone is thinking "BLAS", they are already about an order of magnitude behind on speed. The algorithms to do this effectively are much more complex than that.

What are some alternatives?

When comparing Octavian.jl and RecursiveFactorization.jl you can also consider the following projects:

OpenBLAS - OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

tiny-cuda-nn - Lightning fast C++/CUDA neural network framework

Symbolics.jl - Symbolic programming for the next generation of numerical software

PrimesResult - The results of Dave Plummer's Primes Drag Race

owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz

SciMLBenchmarks.jl - Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R

Verilog.jl - Verilog for Julia

Diffractor.jl - Next-generation AD

Automa.jl - A julia code generator for regular expressions

svls - SystemVerilog language server

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

SuiteSparse.jl - Development of SuiteSparse.jl, which ships as part of the Julia standard library.