RecursiveFactorization.jl vs ModelingToolkit.jl

Compare RecursiveFactorization.jl vs ModelingToolkit.jl and see what their differences are.

ModelingToolkit.jl

An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations (by SciML)
             RecursiveFactorization.jl   ModelingToolkit.jl
Mentions     8                           15
Stars        74                          1,338
Growth       -                           0.9%
Activity     6.1                         9.8
Last commit  12 days ago                 3 days ago
Language     Julia                       Julia
License      GNU GPL v3.0 or later       GNU GPL v3.0 or later
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

RecursiveFactorization.jl

Posts with mentions or reviews of RecursiveFactorization.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biology models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling methods for large enough models?

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd see from the benchmarks if you read them). One of the key differences, of course, is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor in the performance boost (and it is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases; for example, https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead shows that handling SciPyDiffEq with the Julia JIT optimizations gives lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
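
    As a rough sketch of the kind of comparison involved (assuming the `RecursiveFactorization.lu!` entry point from the package README):

    ```julia
    using LinearAlgebra, RecursiveFactorization

    A = rand(500, 500)
    b = rand(500)

    F_blas = lu(A)                                # LAPACK (OpenBLAS/MKL) LU
    F_rf   = RecursiveFactorization.lu!(copy(A))  # pure-Julia recursive LU, in place

    # Both return an LU factorization usable with the generic solve interface
    @assert F_blas \ b ≈ F_rf \ b
    ```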

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so can have lower call overhead from a JIT-compiled language than from a statically compiled language like C, because some of the binding indirection can be optimized away at runtime? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising once it's clear how that works.
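
    For context, this is the kind of direct shared-library call being measured. A minimal sketch using Julia's `ccall` against libc's `strlen`:

    ```julia
    # ccall lowers to a direct native call once JIT-compiled, so the
    # shared-library call carries essentially no wrapper cost.
    len = ccall(:strlen, Csize_t, (Cstring,), "hello")  # strlen from libc
    @assert len == 5
    ```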

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging from size 2 to the thousands, wrapping as many solvers as we can in as many scenarios as we can. And we don't even "win" all of our benchmarks, because unlike for you, these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
  • Small Neural networks in Julia 5x faster than PyTorch
    8 projects | news.ycombinator.com | 14 Apr 2022
    Ask them to download Julia and try it, and file an issue if it is not fast enough. We try to have the latest available.

    See for example: https://github.com/JuliaLinearAlgebra/RecursiveFactorization...

  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    Julia defaults to OpenBLAS, but libblastrampoline makes it so that `using MKL` flips it to MKL on the fly. See the JuliaCon video for more details on that (https://www.youtube.com/watch?v=t6hptekOR7s). The recursive comparison is against OpenBLAS/LAPACK and MKL; see this PR for some (older) details: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2... . What it really comes down to in the end is that OpenBLAS is rather bad, and MKL is optimized for Intel CPUs but not for AMD CPUs, so now that the best CPUs are mostly AMD, having a new set of BLAS tools and mixing that with recursive LAPACK tools is as good or better on most modern systems. Then we see this in practice even when we build BLAS into Sundials for 1,000-ODE chemical reaction networks (https://benchmarks.sciml.ai/html/Bio/BCR.html).
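
    A sketch of the flip itself, assuming Julia 1.7+ where `BLAS.get_config()` reports the active backend:

    ```julia
    using LinearAlgebra
    BLAS.get_config()   # by default reports the bundled libopenblas

    using MKL           # libblastrampoline swaps the BLAS/LAPACK backend on the fly
    BLAS.get_config()   # now reports MKL as the forwarding target
    ```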
  • Julia 1.7 has been released
    15 projects | news.ycombinator.com | 30 Nov 2021
    >I hope those benchmarks are coming in hot

    M1 is extremely good for PDEs because of its large cache lines.

    https://github.com/SciML/DiffEqOperators.jl/issues/407#issue...

    The JuliaSIMD tools, which are used internally for BLAS operations instead of OpenBLAS and MKL (because they tend to outperform standard BLASes for the operations we use: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2...), also generate good code for M1, so that was giving us some powerful use cases right off the bat, even before the heroics that allowed C/Fortran compilers to fully work on M1.

  • Why I Use Nim instead of Python for Data Processing
    12 projects | news.ycombinator.com | 23 Sep 2021
    Not necessarily true with Julia. Many libraries like DifferentialEquations.jl are Julia all the way down, because the pure Julia BLAS tools outperform OpenBLAS and MKL in certain areas. For example, see:

    https://github.com/YingboMa/RecursiveFactorization.jl/pull/2...

    So a stiff ODE solve is pure Julia, LU-factorizations and all.
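
    A minimal sketch of what that looks like (assuming the `RFLUFactorization` wrapper that LinearSolve.jl provides around RecursiveFactorization.jl):

    ```julia
    using OrdinaryDiffEq, LinearSolve

    # Robertson's stiff chemical kinetics problem
    function rober!(du, u, p, t)
        y1, y2, y3 = u
        du[1] = -0.04y1 + 1.0e4 * y2 * y3
        du[2] =  0.04y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2^2
        du[3] =  3.0e7 * y2^2
    end
    prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1.0e5))

    # A Rosenbrock method whose inner LU comes from RecursiveFactorization.jl
    sol = solve(prob, Rodas5(linsolve = RFLUFactorization()))
    ```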

  • Julia Receives DARPA Award to Accelerate Electronics Simulation by 1,000x
    7 projects | news.ycombinator.com | 11 Mar 2021
    Also, the major point is that BLAS plays little to no role here. Algorithms which just hit BLAS are already very suboptimal. There's a tearing step which reduces the problem to many subproblems, which are then more optimally handled by pure Julia numerical linear algebra libraries that greatly outperform OpenBLAS in the regime they are in:

    https://github.com/YingboMa/RecursiveFactorization.jl#perfor...

    And there are hooks in the differential equation solvers to not use OpenBLAS in many cases for this reason:

    https://github.com/SciML/DiffEqBase.jl/blob/master/src/linea...

    Instead, what this comes out to is more of a deconstructed KLU, except instead of reducing to a single sparse linear solve, you can do semi-independent nonlinear solves which then spawn parallel jobs of small semi-dense linear solves handled by these pure Julia linear algebra libraries.

    And that's only a small fraction of the details. But at the end of the day, if someone is thinking "BLAS", they are already about an order of magnitude behind on speed. The algorithms to do this effectively are much more complex than that.

ModelingToolkit.jl

Posts with mentions or reviews of ModelingToolkit.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-29.
  • Mathematically Modelling a PRV
    1 project | /r/ControlTheory | 24 Oct 2022
    I'd use a modeling tool like https://mtk.sciml.ai/dev/. Using the standard library, you wouldn't need to come up with all the equations yourself. Depending on the details of your use case, though, system identification as suggested before might be a faster approach.
  • Simulating a simple circuit with the ModelingToolkit
    2 projects | /r/Julia | 29 Jun 2022
  • “Why I still recommend Julia”
    11 projects | news.ycombinator.com | 25 Jun 2022
    No, you do get type errors at runtime. The most common one is a MethodError, which corresponds to a dispatch not being found. This is the one people then complain about for its long stacktraces and for being hard to read (and that's a valid criticism). The reason for it is that if you do x*y with a type combination that has no corresponding dispatch, i.e. *(x::T1, y::T2) is not defined anywhere, then Julia looks through the method table of the function, does not find one, and throws this MethodError. You will only get no error if a method is found. Now what can happen is that you have a method for an abstract type, *(x::T1, y::AbstractArray), but `y` does not "actually" act like an AbstractArray in some way. If the way that it's "not an AbstractArray" is that it's missing some method overloads of the AbstractArray interface (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...), you will get a MethodError thrown on that interface function. Thus you will only not get an error if someone has declared `typeof(y) <: AbstractArray` and implemented the AbstractArray interface.
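
    A minimal illustration of that runtime dispatch failure:

    ```julia
    f(x::Number, y::Number) = x + y

    f(1, 2.0)    # 3.0, a matching method exists
    f(1, "two")  # ERROR: MethodError: no method matching f(::Int64, ::String)
    ```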

    However, what Yuri pointed out is that there are some packages (specifically in the statistics area) which implemented functions like `f(A::AbstractArray)` but used `for i in 1:length(A)` to iterate through A's values. Notice that the AbstractArray interface has interface functions for "non-traditional indices", including `axes(A)`, a function you call to get "a tuple of AbstractUnitRange{<:Integer} of valid indices". Thus these codes are incorrect: by the definition of the interface you should be doing `for i in eachindex(A)` (or iterating `axes(A, 1)`) if you want to support an AbstractArray, because there is no guarantee that its indices go from `1:length(A)`. Note that this was added to the `AbstractArray` interface in the v1.0 change, which is notably after the codes he referenced were written, and thus it's more that they were not updated to handle this expanded interface when the v1.0 transition occurred.

    This is important to understand because the criticisms and proposed "solutions" don't actually match the case... at all. This is not a case of Julia just letting anything through: someone had to purposefully define these functions for them to exist. And interfaces are not a solution here because there is an interface here; its rules were just not followed. I don't know of an interface system which would actually throw an error if someone writes a loop `for i in 1:length(A)` in code where `A` is then indexed by the loop variable. That analysis is rather difficult at the compiler level because it's non-local: `length(A)` is valid since querying for the length is part of the AbstractArray interface (for good reasons), so `1:length(A)` is valid since that's just range construction on integers, so the for loop construction itself is valid, and it's only invalid because of some other knowledge about how `A[i]` should work (this loop structure could be correct if it never indexes `A[i]` but instead does something like `sum(i)`). If you want this to throw an error, the only real thing you could do is remove indexing from the AbstractArray interface and rely solely on iteration, which I'm not opposed to (given the relationship to GPUs, of course). In any case, you can see the question is "what is the right interface?", not "are there even interfaces?" (to which the answer is: yes, but the errors are thrown at runtime as MethodError instead of at compile time as some MethodNotImplemented; the latter would be cool for better debugging and stacktraces but isn't a solution).

    This is why the real discussions are not about interfaces as a solution: they don't solve this issue, and even languages with interfaces have this issue. It's about tools for helping code style. You probably should just never write `for i in 1:length(A)`; you should always write `for i in eachindex(A)` or `for i in axes(A, 1)`, because those iteration styles work for `Array` but also for any `AbstractArray`, and thus it's just a safer way to code. That is why style guides specifically say not to do this (for example, https://github.com/SciML/SciMLStyle#generic-code-is-preferre...), and things like JuliaFormatter automatically flag it as a style break (which would cause CI failures in organizations like SciML which enforce SciML Style formatting as a CI run with GitHub Actions: https://github.com/SciML/ModelingToolkit.jl/blob/v8.14.1/.gi...). There's a call to add linting support for this as well, flagging it any time someone writes this code. If everyone is told not to assume 1-based indexing, formatting CI fails if it is assumed, and the linter underlines every piece of code that does it in red (along with many other measures, including extensive downstream testing, fuzzing against other array types, etc.), then we're at least pretty well guarded against it. And many Julia organizations, like SciML, have these practices in place to guard against it. Yuri's specific point is more that JuliaStats does not.
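
    A self-contained demonstration of why the style rule matters, using OffsetArrays.jl to make a non-1-based array:

    ```julia
    using OffsetArrays

    A = OffsetArray([10, 20, 30], -1:1)  # valid indices are -1, 0, 1

    sum_bad(x)  = sum(x[i] for i in 1:length(x))   # assumes 1-based indexing
    sum_good(x) = sum(x[i] for i in eachindex(x))  # respects the interface

    sum_good(A)   # 60
    # sum_bad(A)  # BoundsError: indices 2 and 3 do not exist
    ```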

  • ‘Machine Scientists’ Distill the Laws of Physics from Raw Data
    8 projects | news.ycombinator.com | 10 May 2022
    The thing to watch in the space of Simulink/Modelica is https://github.com/SciML/ModelingToolkit.jl . It's an acausal modeling system similar to Modelica (though extended to things like SDEs, PDEs, and nonlinear optimization), and it has a standard library (https://github.com/SciML/ModelingToolkitStandardLibrary.jl) similar to the MSL. There's still a lot to do, but it's pretty functional at this point. The two other projects to watch are FunctionalModels.jl (https://github.com/tshort/FunctionalModels.jl, the renamed Sims.jl), which is built using ModelingToolkit.jl and puts a more functional interface on it, and Modia.jl (https://github.com/ModiaSim/Modia.jl), which had a complete rewrite not too long ago; in its new form it's fairly similar to ModelingToolkit.jl, and the differences are more in the details. For causal modeling similar to Simulink, there's Causal.jl (https://github.com/zekeriyasari/Causal.jl), which is fairly feature-complete, though I think a lot of people these days are going towards acausal modeling instead, so flipping Simulink -> acausal, and picking up Julia in that transition, is the most likely direction (and given MTK has gotten 40,000 downloads in the last year, I think there's good data backing that up).
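
    To give a flavor of the acausal style, here is a sketch along the lines of the standard library's RC-circuit tutorial (component and connector names are from ModelingToolkitStandardLibrary and may shift between versions):

    ```julia
    using ModelingToolkit, OrdinaryDiffEq
    using ModelingToolkitStandardLibrary.Electrical
    using ModelingToolkitStandardLibrary.Blocks: Constant

    @variables t
    @named resistor  = Resistor(R = 1.0)
    @named capacitor = Capacitor(C = 1.0)
    @named source    = Voltage()
    @named constant  = Constant(k = 1.0)
    @named ground    = Ground()

    # Acausal connections: no input/output directions, just physical ports
    connections = [connect(constant.output, source.V),
                   connect(source.p, resistor.p),
                   connect(resistor.n, capacitor.p),
                   connect(capacitor.n, source.n, ground.g)]

    @named model = ODESystem(connections, t;
                             systems = [resistor, capacitor, source, constant, ground])
    sys  = structural_simplify(model)   # eliminates the algebraic connection equations
    prob = ODEProblem(sys, Pair[], (0.0, 10.0))
    sol  = solve(prob, Rodas4())
    ```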

    And quick mention to bring it back to the main thread here, the DataDrivenDiffEq symbolic regression API gives back Symbolics.jl/ModelingToolkit.jl objects, meaning that the learned equations can be put directly into the simulation tools or composed with other physical models. We're really trying to marry this process modeling and engineering world with these "newer" AI tools.

  • How do I force it to answer in a decimal format.
    1 project | /r/matlab | 13 Mar 2022
    In this case, yes, this should just be done numerically. But using symbolic transformations to optimize numeric code is also a really neat application of symbolic computing that doesn't get enough attention, imo. [This library](https://github.com/SciML/ModelingToolkit.jl), for example, uses symbolics to do sparsity detection, automatic derivative/gradient/Jacobian/Hessian calculations, index reduction, etc., to speed up numerical differential equation solving.
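
    As a small illustration of symbolics applied to numerics, here is a sketch using Symbolics.jl, the symbolic engine underlying ModelingToolkit.jl (assuming its `jacobian` and `jacobian_sparsity` utilities):

    ```julia
    using Symbolics

    @variables x y z
    f = [x^2 + y, y * z, z]

    J = Symbolics.jacobian(f, [x, y, z])           # symbolic Jacobian
    S = Symbolics.jacobian_sparsity(f, [x, y, z])  # Bool sparsity pattern
    ```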
  • Julia 1.7 has been released
    15 projects | news.ycombinator.com | 30 Nov 2021
    https://homes.cs.washington.edu/~thickstn/ctpg-project-page/...

    That's all showing the raw iteration count to show that it is algorithmically faster, but the time per iteration is also fast for many reasons, showcased in the SciMLBenchmarks routinely outperforming C and Fortran solvers (https://github.com/SciML/SciMLBenchmarks.jl). So it's doing pretty well, and things like the automated discovery of black hole dynamics are all done using the universal differential equation framework enabled by the SciML tools (see https://arxiv.org/abs/2102.12695 for that application).

    What we are missing, however, is that right now these simulations are all written as raw differential equations, so we do need a better set of modeling tools. That said, MuJoCo and DiffTaichi are not great physical modeling environments for building real systems; instead we would point to Simulink and Modelica as what are really useful for building real-world systems. So it would be cool if there was a modeling language in Julia which extends that universe and directly does optimal code generation for the Julia solvers... and that's what ModelingToolkit.jl is (https://github.com/SciML/ModelingToolkit.jl). That project is still pretty new, but there's already enough to show some large-scale models outperforming Dymola on examples that require symbolic tearing and index reduction, which is far more than what physical simulation environments used for non-scientific purposes (MuJoCo and DiffTaichi) are able to do. See the workshop for details (https://www.youtube.com/watch?v=HEVOgSLBzWA). And that's just the top-level details; there's a whole Julia Computing product called JuliaSim (https://juliacomputing.com/products/juliasim/) which is being built on these pieces to do things like automatically generate ML-accelerated components and add model-building GUIs.
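
    For a sense of the mechanism (a toy sketch, nowhere near the Dymola-scale examples), `structural_simplify` eliminates algebraic equations symbolically before the solver ever sees them:

    ```julia
    using ModelingToolkit, OrdinaryDiffEq

    @parameters t
    @variables x(t) y(t)
    D = Differential(t)

    # y is defined purely algebraically, so simplification eliminates it
    eqs = [D(x) ~ -y,
           y ~ 2x]
    @named sys = ODESystem(eqs, t)
    simple = structural_simplify(sys)  # one state (x) remains; y becomes observed

    prob = ODEProblem(simple, [x => 1.0], (0.0, 1.0))
    sol  = solve(prob, Tsit5())
    ```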

    That said, MuJoCo and DiffTaichi have much better visualizations and animations than MTK. Our focus so far has been on the core routines, making them fast, scalable, stable, and extensive. You'll need to wait for the near future (or build something with Makie) if you want the pretty pictures of the robot to happen automatically. That said, Julia's Makie visualization system has already been shown to be sufficiently powerful for this kind of application (https://nextjournal.com/sdanisch/taking-your-robot-for-a-wal...), so we're excited to see where that will go in the future.

  • [Research] Input Arbitrary PDE -> Output Approximate Solution
    4 projects | /r/MachineLearning | 10 Jul 2021
    PDEs are difficult because there is no simple numerical definition covering all PDEs: they can be defined by arbitrarily many functions. u' = Laplace(u) + f? Define f. u' = g(u) * Laplace(u) + f? Define f and g. Etc. To cover the space of PDEs you have to go symbolic at some point and make the discretization methods dependent on the symbolic form. This is precisely what the ModelingToolkit.jl ecosystem is doing. One instantiation of a discretizer on this symbolic form is NeuralPDE.jl, which takes a symbolic PDESystem and generates an OptimizationProblem for a neural network representing the solution via a physics-informed neural network (PINN).
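
    For example, a symbolic PDE specification might look like the following sketch (the heat equation; `Interval` is assumed from DomainSets.jl, as in the MTK/NeuralPDE tutorials):

    ```julia
    using ModelingToolkit, DomainSets

    @parameters t x
    @variables u(..)
    Dt  = Differential(t)
    Dxx = Differential(x)^2

    # The heat equation kept in symbolic form, so a discretizer
    # (e.g. NeuralPDE.jl's PINN discretization) can consume it
    eq  = Dt(u(t, x)) ~ Dxx(u(t, x))
    bcs = [u(0, x) ~ sin(pi * x),
           u(t, 0) ~ 0.0,
           u(t, 1) ~ 0.0]
    domains = [t ∈ Interval(0.0, 1.0),
               x ∈ Interval(0.0, 1.0)]

    @named pde = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])
    ```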
  • Should I switch over completely to Julia from Python for numerical analysis/computing?
    5 projects | /r/Julia | 8 Jul 2021
    There's very clear momentum for Julia in this domain of modeling and simulation. With JuliaSim funding an entire modeling and simulation department within Julia Computing, dedicated to building out an ecosystem that accelerates this domain, and with the centralization around the SciML tooling, this is an area where we absolutely have both a manpower and a momentum advantage. We're getting many universities (PhD students and professors) involved on the open source side, while building out different commercial tools and GUIs on top of the open numerical core. The modeling and simulation domain itself is soon going to have its own SciMLCon, since our developer community has gotten too large to fit into a few JuliaCon talks: it needs its own days to fit everyone! Not only that, in many aspects we're not just moving faster but have already passed the alternatives. Not in every way (there are still some important discussions in controls that need to happen), but that's what the momentum is for.
  • What should a graduate engineer know about MATLAB?
    2 projects | /r/engineering | 26 Apr 2021
  • I'm considering Rust, Go, or Julia for my next language and I'd like to hear your thoughts on these
    12 projects | /r/rust | 16 Apr 2021
    Julia has great support for modeling; have a look at ModelingToolkit.jl. From the README:
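
    At the time, the README opened with a Lorenz-equations example along these lines (a sketch, not the verbatim README text):

    ```julia
    using ModelingToolkit, OrdinaryDiffEq

    @parameters t σ ρ β
    @variables x(t) y(t) z(t)
    D = Differential(t)

    # The Lorenz system as symbolic equations
    eqs = [D(x) ~ σ * (y - x),
           D(y) ~ x * (ρ - z) - y,
           D(z) ~ x * y - β * z]

    @named sys = ODESystem(eqs, t)
    prob = ODEProblem(sys, [x => 1.0, y => 0.0, z => 0.0], (0.0, 100.0),
                      [σ => 10.0, ρ => 28.0, β => 8 / 3])
    sol = solve(prob, Tsit5())
    ```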

What are some alternatives?

When comparing RecursiveFactorization.jl and ModelingToolkit.jl you can also consider the following projects:

tiny-cuda-nn - Lightning fast C++/CUDA neural network framework

casadi - CasADi is a symbolic framework for numeric optimization implementing automatic differentiation in forward and reverse modes on sparse matrix-valued computational graphs. It supports self-contained C-code generation and interfaces state-of-the-art codes such as SUNDIALS, IPOPT etc. It can be used from C++, Python or Matlab/Octave.

PrimesResult - The results of Dave Plummer's Primes Drag Race

DifferentialEquations.jl - Multi-language suite for high-performance solvers of differential equations and scientific machine learning (SciML) components. Ordinary differential equations (ODEs), stochastic differential equations (SDEs), delay differential equations (DDEs), differential-algebraic equations (DAEs), and more in Julia.

SciMLBenchmarks.jl - Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R

dolfinx - Next generation FEniCS problem solving environment

Diffractor.jl - Next-generation AD

NeuralPDE.jl - Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

svls - SystemVerilog language server

Symbolics.jl - Symbolic programming for the next generation of numerical software

SuiteSparse.jl - Development of SuiteSparse.jl, which ships as part of the Julia standard library.

Gridap.jl - Grid-based approximation of partial differential equations in Julia