RecursiveFactorization VS KiteSimulators.jl

Compare RecursiveFactorization vs KiteSimulators.jl and see what their differences are.

                  RecursiveFactorization   KiteSimulators.jl
Mentions          3                         1
Stars             -                         16
Growth            -                         -
Activity          -                         8.8
Last commit       -                         3 days ago
Language          -                         Julia
License           -                         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

RecursiveFactorization

Posts with mentions or reviews of RecursiveFactorization. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, as of course you'd see from the benchmarks if you read them). One of the key differences, though, is that there are some pure-Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalents in many scenarios, and that's one noted factor in the performance boost (and it is not trivial to wrap into the interface of the other solvers, so it's not done); a short usage sketch follows this list. There are other benchmarks showing that the comparison is not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead, which shows that the SciPyDiffEq wrapper with the Julia JIT optimizations gives lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead to call from a JIT-compiled language than from a statically compiled language like C, because you can optimize away some of the bindings at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia coming out faster than C and Fortran there. This shouldn't be surprising once it's clear how that works (see the small ccall sketch after this list).

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging in size from 2 to the thousands, with as many methods as we can wrap, in as many scenarios as we can. And we don't even "win" all of our benchmarks because, unlike for you, these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
  • Small Neural networks in Julia 5x faster than PyTorch
    8 projects | news.ycombinator.com | 14 Apr 2022
    Ask them to download Julia and try it, and file an issue if it is not fast enough. We try to have the latest available.

    See for example: https://github.com/JuliaLinearAlgebra/RecursiveFactorization...
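
The comment above points to RecursiveFactorization.jl as a pure-Julia LU factorization that can beat the OpenBLAS/MKL-backed routines, particularly at small to medium matrix sizes. A minimal sketch of what such a comparison might look like, assuming the package mirrors LinearAlgebra's `lu` interface as its README describes (check the package for the exact API):

```julia
# Hedged sketch: compare LinearAlgebra's BLAS/LAPACK-backed LU with the
# pure-Julia LU from RecursiveFactorization.jl. Assumes the package exposes
# an unexported `lu` that is a drop-in replacement for LinearAlgebra.lu.
using LinearAlgebra, BenchmarkTools
import RecursiveFactorization

A = rand(64, 64)   # small/medium sizes are where the pure-Julia kernel tends to shine

# OpenBLAS/MKL-backed LU
@btime lu($A);

# Pure-Julia recursive LU
@btime RecursiveFactorization.lu($A);
```

Timings will of course depend on the BLAS build, thread counts, and matrix size, which is exactly what the SciMLBenchmarks site tracks in more detail.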
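On the FFI point, Julia's `ccall` lowers to a direct native call into the shared library after JIT compilation, which is the path the linked ffi-overhead benchmark measures. A tiny illustration (the `:clock` call into libc is the example used in the Julia manual):

```julia
# A foreign call into libc from Julia. After JIT compilation this becomes an
# ordinary native call instruction, with no interpreter or wrapper layer in
# between, which is why the per-call overhead is so low.
t = ccall(:clock, Int32, ())   # CPU clock ticks since process start (libc `clock`)
println("clock() returned ", t)
```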

KiteSimulators.jl

Posts with mentions or reviews of KiteSimulators.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    Sure, you can keep moving the goalposts. Of course it doesn't make sense to bind a C production code to a C package (SUNDIALS) through Julia. But if you're asking who is using Julia bindings to SUNDIALS as part of a real case, one that comes to mind is the Sienna power-systems dynamics work out of NREL (https://www.nrel.gov/analysis/sienna.html). If you look inside the dynamics part of Sienna, you can clearly see IDA being used (https://github.com/NREL-Sienna/PowerSimulationsDynamics.jl). IIRC, at a recent Julia meetup in the Benelux region, kite model simulations also used it for the same reasons (https://github.com/aenarete/KiteSimulators.jl), which of course points to the open-source code organization Aenarete (http://aenarete.eu/).

    The way to find other use cases is to look through the citations; generally there will be a pattern to it. For cases which reduce to (mass-matrix) ODEs, FBDF generally (but not always) outperforms CVODE's BDF these days, so those cases have mostly converted over. This includes not just ODEs but also DAEs defined through ModelingToolkit, as the index-reduction process generates ODEs and the ODE form generally ends up more efficient than the original DAE form (though not always, of course). It's for the fully implicit DAE form that the documentation (as of May 1st 2023) recommends Sundials' IDA as the most efficient method (https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/); yes, the docs recommend non-Julia solvers when appropriate, and there are more than a few such recommendations in the documentation. Power systems are such a case, with index-1 DAEs written in the fully implicit form which are in many instances difficult to write in mass-matrix form and not already written in ModelingToolkit, hence the use of IDA here (a minimal sketch of that workflow follows this list). By the same reasoning you can also search the citations for other use cases of IDA.
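
To make the IDA recommendation above concrete, here is a minimal sketch of solving a fully implicit DAE (the classic Robertson system in residual form) through the SciML common interface, roughly following the DifferentialEquations.jl documentation; treat it as illustrative rather than a tuned production setup:

```julia
# Hedged sketch: the Robertson chemical-kinetics DAE in fully implicit
# (residual) form, solved with Sundials' IDA via the common interface.
using DifferentialEquations, Sundials

function robertson!(resid, du, u, p, t)
    resid[1] = -0.04u[1] + 1e4 * u[2] * u[3] - du[1]
    resid[2] =  0.04u[1] - 3e7 * u[2]^2 - 1e4 * u[2] * u[3] - du[2]
    resid[3] =  u[1] + u[2] + u[3] - 1.0          # algebraic constraint
end

u0  = [1.0, 0.0, 0.0]
du0 = [-0.04, 0.04, 0.0]
tspan = (0.0, 1e5)

# The third variable is algebraic, hence differential_vars = [true, true, false].
prob = DAEProblem(robertson!, du0, u0, tspan, differential_vars = [true, true, false])
sol  = solve(prob, IDA())

# For problems that can instead be written as (mass-matrix) ODEs, the comment
# above notes that the pure-Julia FBDF() from OrdinaryDiffEq is the usual
# alternative to Sundials' CVODE_BDF().
```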

What are some alternatives?

When comparing RecursiveFactorization and KiteSimulators.jl you can also consider the following projects:

tiny-cuda-nn - Lightning fast C++/CUDA neural network framework

SciPyDiffEq.jl - Wrappers for the SciPy differential equation solvers for the SciML Scientific Machine Learning organization

diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/

ControlSystems.jl - A Control Systems Toolbox for Julia

vectorflow

PowerSimulationsDynamics.jl - Julia package to run Dynamic Power System simulations. Part of the Scalable Integrated Infrastructure Planning Initiative at the National Renewable Energy Lab.

LeNetTorch - PyTorch implementation of LeNet for fitting MNIST for benchmarking.

ffi-overhead - comparing the c ffi (foreign function interface) overhead on various programming languages

RecursiveFactorization.jl

SciMLBenchmarks.jl - Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R