SciMLBenchmarks.jl VS 18337

Compare SciMLBenchmarks.jl vs 18337 and see what their differences are.

18337

18.337 - Parallel Computing and Scientific Machine Learning (by mitmath)
              SciMLBenchmarks.jl    18337
Mentions      10                    14
Stars         292                   189
Growth        1.0%                  3.2%
Activity      9.7                   5.7
Last commit   8 days ago            about 1 year ago
Language      MATLAB                Jupyter Notebook
License       MIT License           -
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

SciMLBenchmarks.jl

Posts with mentions or reviews of SciMLBenchmarks.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biology models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1)-scaling methods scale better than the O(log n)-scaling methods for large enough models.
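
    For a sense of what those jump-process benchmarks exercise, here is a minimal, hedged sketch using JumpProcesses.jl with an assumed toy reaction (a single decay A -> 0 with rate constant k = 0.1, not one of the benchmark models); the benchmark comparisons swap out the aggregator passed to JumpProblem.

        # Toy stochastic simulation algorithm (SSA) run with JumpProcesses.jl.
        using JumpProcesses

        rate(u, p, t) = p.k * u[1]                   # propensity of the decay A -> 0
        affect!(integrator) = (integrator.u[1] -= 1)  # remove one molecule of A
        jump = ConstantRateJump(rate, affect!)

        u0 = [1000]
        prob = DiscreteProblem(u0, (0.0, 10.0), (k = 0.1,))
        jprob = JumpProblem(prob, Direct(), jump)    # the benchmarks compare Direct() against other aggregators
        sol = solve(jprob, SSAStepper())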

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd see from the benchmarks if you read them). One of the key differences, of course, is that there are some pure-Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalents in many scenarios, and that's one noted factor in the performance boost (and it is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing that the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
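
    As a concrete illustration of that point, here is a hedged sketch (not one of the official benchmark scripts) comparing the pure-Julia LU from RecursiveFactorization.jl against the BLAS/LAPACK-backed factorization; which one wins depends on matrix size, hardware, and thread count.

        using LinearAlgebra, RecursiveFactorization, BenchmarkTools

        A = rand(100, 100)
        @btime lu!(B) setup = (B = copy(A));                          # LAPACK via OpenBLAS/MKL
        @btime RecursiveFactorization.lu!(B) setup = (B = copy(A));   # pure Julia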

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead when called from a JIT-compiled language than from a statically compiled language like C, because some of the bindings can be optimized away at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising once it's clear how that works.
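
    A hedged sketch of what the Julia side of such an FFI-overhead measurement looks like (libplus and its plus(int, int) symbol are hypothetical stand-ins for the tiny shared library that repo builds):

        # Call a trivial C function from a shared library many times and time the loop.
        function call_plus_many(n)
            acc = 0
            for i in 1:n
                acc += ccall((:plus, "libplus"), Cint, (Cint, Cint), Cint(i % 100), Cint(1))
            end
            return acc
        end

        @time call_plus_many(100_000_000)   # the JIT keeps this call path very lean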

    I mean yes, someone can always ask for more benchmarks, but now we have a site that auto-updates tons and tons of ODE benchmarks with ODE systems ranging in size from 2 to the thousands, wrapping as many solvers as we can in as many scenarios as we can. And we don't even "win" all of our benchmarks, because these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Why Fortran is a scientific powerhouse
    2 projects | news.ycombinator.com | 11 Jan 2023
    Project.toml or Manifest.toml? Every package has a Project.toml which specifies bounds (https://github.com/SciML/OrdinaryDiffEq.jl/blob/master/Proje...). Every fully reproducible project has a Manifest that describes the complete package state (https://github.com/SciML/SciMLBenchmarks.jl/blob/master/benc...).
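
    As a small sketch of what that reproducibility means in practice, instantiating a project that carries both files recreates the exact package state recorded in the Manifest (the path below is a hypothetical placeholder):

        using Pkg
        Pkg.activate("path/to/some/benchmark/project")   # hypothetical project with Project.toml + Manifest.toml
        Pkg.instantiate()                                # installs exactly the versions pinned in Manifest.toml
        Pkg.status()                                     # shows the resolved package versions
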
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    > But in the end, it's FORTRAN all the way down. Even in Julia.

    That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story. The performance of all of this is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl.
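
    For a flavor of that claim, here is a hedged sketch (not an official benchmark) comparing Octavian.jl's pure-Julia matrix multiply with the BLAS-backed mul!; the crossover point depends on matrix size, hardware, and thread settings.

        using LinearAlgebra, Octavian, BenchmarkTools

        A, B = rand(200, 200), rand(200, 200)
        C = similar(A)
        @btime mul!($C, $A, $B);              # OpenBLAS/MKL gemm
        @btime Octavian.matmul!($C, $A, $B);  # pure Julia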

  • Twitter Thread: Symbolic Computing for Compiler Optimizations in Julia
    3 projects | /r/Julia | 3 Jan 2022
    Anything that continues to improve the SciMLBenchmarks of differential equation solvers, inverse problems, scientific machine learning, and equation discovery really. But there's a lot of other applications in mind, like generating compiler passes that improve floating point roundoff (like Herbie), a pure-Julia simple implementation of XLA-transformations for BLAS fusion, and a few others that are a bit more out there and will require a paper to describe the connection.
  • In 2022, the difference between symbolic computing and compiler optimizations will be erased in #julialang. Anyone who can come up with a set of symbolic mathematical rules will automatically receive an optimized compiler pass to build better code
    3 projects | /r/programmingcirclejerk | 2 Jan 2022
    Show me a single DAE solver in Haskell that has even come close to the performance we get in the Julia SciMLBenchmarks. Here's just one example. For Haskell packages, all I see are wrappers to GSL and Sundials, both of which are slow in comparison. So this is an 8.5x speedup over something that was already faster than what you could find in Haskell. Show me something with decent speed in DAEs or it's useless.
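
    For reference, this is roughly what a DAE solve looks like on the Julia side, sketched here with the classic Robertson problem in mass-matrix form (not the benchmark script itself):

        using OrdinaryDiffEq, LinearAlgebra

        function rober!(du, u, p, t)
            y1, y2, y3 = u
            du[1] = -0.04y1 + 1e4 * y2 * y3
            du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
            du[3] =  y1 + y2 + y3 - 1.0          # algebraic constraint
        end

        M = Diagonal([1.0, 1.0, 0.0])            # zero entry marks the algebraic equation
        f = ODEFunction(rober!, mass_matrix = M)
        prob = ODEProblem(f, [1.0, 0.0, 0.0], (0.0, 1e5))
        sol = solve(prob, Rodas5(), reltol = 1e-8, abstol = 1e-8)
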
  • Tutorials for Learning Runge-Kutta Methods with Julia?
    5 projects | /r/Julia | 27 Dec 2021
    That's both a joke and a truth. The DifferentialEquations.jl source code, along with the SciMLBenchmarks and all of the associated documentation, is by far the most complete resource on all of this stuff at this point, for a reason. I've always treated it as "a lab notebook for the community," which is why those 8,000 lines of tableau code, the thousands of convergence tests, etc. are there. Papers have typos sometimes, benchmarks change over time, etc. But well-tested code tells you whether something actually converges and what the true performance is today.
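
    In that spirit, here is a tiny self-contained sketch of the kind of thing those files contain: the classic RK4 step written from its tableau, plus an empirical convergence check (the observed order should come out near 4).

        rk4_step(f, u, t, dt) = begin
            k1 = f(u, t)
            k2 = f(u + dt/2 * k1, t + dt/2)
            k3 = f(u + dt/2 * k2, t + dt/2)
            k4 = f(u + dt * k3, t + dt)
            u + dt/6 * (k1 + 2k2 + 2k3 + k4)
        end

        function integrate(f, u0, tspan, dt)
            t, u = tspan[1], u0
            while t < tspan[2] - 1e-12
                u = rk4_step(f, u, t, dt); t += dt
            end
            return u
        end

        f(u, t) = -u                               # exact solution: exp(-t)
        errs = [abs(integrate(f, 1.0, (0.0, 1.0), dt) - exp(-1.0)) for dt in (0.1, 0.05, 0.025)]
        @show log2.(errs[1:end-1] ./ errs[2:end])  # empirical order, ≈ 4
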
  • [D] How important is Numerical Analysis for machine learning?
    2 projects | /r/MachineLearning | 23 Dec 2021
    Star-P was sold off to Microsoft, IIRC. Some of the people who had interned there then joined Alan's lab. They created the Julia programming language, where parallelism and performance are now built directly into the language. I created the differential equation solver libraries for the language, which then used all of these properties to benchmark very well, and that's how I subsequently started working with Alan. Then we took this to build systems that combine machine learning and numerical solvers to accelerate and automatically discover physical systems, and the resulting SciML organization and the scientific machine learning research, along with compiler-level automatic differentiation and parallelism, is where all of that is today with the Julia Lab.
  • Julia 1.7 has been released
    15 projects | news.ycombinator.com | 30 Nov 2021
    https://homes.cs.washington.edu/~thickstn/ctpg-project-page/...

    That's all showing the raw iteration count to demonstrate that it is algorithmically faster, but the time per iteration is also fast for many reasons showcased in the SciMLBenchmarks, which routinely outperform C and Fortran solvers (https://github.com/SciML/SciMLBenchmarks.jl). So it's doing pretty well, and things like the automated discovery of black hole dynamics are all done using the universal differential equation framework enabled by the SciML tools (see https://arxiv.org/abs/2102.12695 for that application).

    What we are missing, however, is that right now these simulations all write out raw differential equations, so we do need a better set of modeling tools. That said, MuJoCo and DiffTaichi are not great physical modeling environments for building real systems; instead we would point to Simulink and Modelica as what are really useful for building real-world systems. So it would be cool if there was a modeling language in Julia which extends that universe and directly does optimal code generation for the Julia solvers... and that's what ModelingToolkit.jl is (https://github.com/SciML/ModelingToolkit.jl). That project is still pretty new, but there's already enough to show some large-scale models outperforming Dymola on examples that require symbolic tearing and index reduction, which is far more than what physical simulation environments used for non-scientific purposes (MuJoCo and DiffTaichi) are able to do. See the workshop for details (https://www.youtube.com/watch?v=HEVOgSLBzWA). And those are just the top-level details; there's a whole Julia Computing product called JuliaSim (https://juliacomputing.com/products/juliasim/) which is being built on these pieces to do things like automatically generate ML-accelerated components and add model-building GUIs.
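
    To make that concrete, here is a minimal hedged sketch of defining and solving a model symbolically with ModelingToolkit.jl (the Lorenz system; the API shown follows older tutorials and may differ slightly across package versions):

        using ModelingToolkit, OrdinaryDiffEq

        @parameters σ ρ β
        @variables t x(t) y(t) z(t)
        D = Differential(t)

        eqs = [D(x) ~ σ * (y - x),
               D(y) ~ x * (ρ - z) - y,
               D(z) ~ x * y - β * z]

        @named sys = ODESystem(eqs, t)
        sys = structural_simplify(sys)           # symbolic simplification before solving
        prob = ODEProblem(sys, [x => 1.0, y => 0.0, z => 0.0], (0.0, 100.0),
                          [σ => 10.0, ρ => 28.0, β => 8 / 3])
        sol = solve(prob, Tsit5())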

    That said, MuJoCo and DiffTaichi have much better visualizations and animations than MTK. Our focus so far has been on the core routines, making them fast, scalable, stable, and extensive. You'll need to wait for the near future (or build something with Makie) if you want the pretty pictures of the robot to happen automatically. That said, Julia's Makie visualization system has already been shown to be sufficiently powerful for this kind of application (https://nextjournal.com/sdanisch/taking-your-robot-for-a-wal...), so we're excited to see where that will go in the future.

  • Is Julia suitable for computational physics?
    4 projects | /r/Julia | 5 Jan 2021
    Most of the SciML organization is dedicated to research- and production-level scientific computing for domains like physical systems, chemical reactions, and systems biology (and more, of course). The differential equation benchmarks are quite good in comparison to a lot of C++ and Fortran libraries, there are modern neural PDE solvers, pervasive automatic differentiation, automated GPU and distributed parallelism, SDE solvers, DDE solvers, DAE solvers, ModelingToolkit.jl for Modelica-like symbolic transformations for higher-index DAEs, Bayesian differential equations, etc. All of that then ties into big PDE solving. You get the picture.
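
    As a small illustration of that breadth, an SDE uses the same problem/solve interface as the ODE, DDE, and DAE solvers (a hedged sketch with an arbitrary toy model, not one of the benchmarks):

        using StochasticDiffEq

        f(u, p, t) = 1.01u        # drift
        g(u, p, t) = 0.3u         # diffusion
        prob = SDEProblem(f, g, 0.5, (0.0, 1.0))
        sol = solve(prob, SOSRI())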

18337

Posts with mentions or reviews of 18337. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-31.
  • Hello, I wanted to know what would be the best way to get started in Julia and artificial intelligence. I looked around at a lot of different languages and saw Julia was good for data science and for artificial intelligence, but would like to know what would be good ways to just do it. Thank you
    1 project | /r/Julia | 13 Mar 2022
  • SciML/SciMLBook: Parallel Computing and Scientific Machine Learning (SciML): Methods and Applications (MIT 18.337J/6.338J)
    4 projects | /r/Julia | 31 Jan 2022
    This was previously the https://github.com/mitmath/18337 course website, but now in a new iteration of the course it is being reset. To avoid issues like this in the future, we have moved the "book" out to its own repository, https://github.com/SciML/SciMLBook, where it can continue to grow and be hosted separately from the structure of a course. This means it can be something other courses can depend on as well. I am looking for web developers who can help build a nicer webpage for this book, and also for the SciMLBenchmarks.
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    I would say Fortran is a pretty great language for teaching beginners in numerical analysis courses. The only issue I have with it is that, similar to using C+MPI (which is what I first learned with, well, after a bit of Java), the students don't tend to learn how to go "higher level". You teach them how to write a three-loop matrix-matrix multiplication, but the next thing you should teach is how to use higher-level BLAS tools and why they will outperform the three-loop form. But Fortran then becomes very cumbersome (`dgemm` etc.), so students continue to write simple loops and simple algorithms where they shouldn't. A first numerical analysis course should teach simple algorithms AND why the simple algorithms are not good, but a lot of instructors and tools fail to emphasize the second part of that statement.

    On the other hand, the performance plus the high-level nature of Julia makes it a rather excellent tool for this. In the MIT graduate course 18.337 Parallel Computing and Scientific Machine Learning (https://github.com/mitmath/18337) we do precisely that, starting with direct optimization of loops, then moving to linear algebra, ODE solving, and implementing automatic differentiation. I don't think anyone would want to give a homework assignment to implement AD in Fortran, but in Julia you can do that shortly after looking at loop performance and SIMD, and that's really something special. Steven Johnson's 18.335 graduate course in Numerical Analysis (https://github.com/mitmath/18335) showcases some similar niceties. I really like the demonstration that starts from scratch with the three loops and shows how SIMD and cache-oblivious algorithms build towards BLAS performance, and why most users should ultimately not be writing such loops (https://nbviewer.org/github/mitmath/18335/blob/master/notes/...) and should instead use the built-in `mul!` in most scenarios. There are very few languages where such "start to finish" demonstrations can be showcased in such a clear fashion.
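
    Here is a compact sketch of the two endpoints of that progression, assuming nothing beyond the standard library plus BenchmarkTools: the naive three-loop multiply next to the built-in mul!, which is exactly the gap those lecture notes explain.

        using LinearAlgebra, BenchmarkTools

        function matmul3!(C, A, B)
            fill!(C, 0)
            @inbounds for j in axes(B, 2), k in axes(A, 2), i in axes(A, 1)
                C[i, j] += A[i, k] * B[k, j]   # column-major-friendly loop order
            end
            return C
        end

        A, B = rand(300, 300), rand(300, 300)
        C = similar(A)
        @btime matmul3!($C, $A, $B);   # naive loops
        @btime mul!($C, $A, $B);       # BLAS-backed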

  • What are some interesting papers to read?
    2 projects | /r/Julia | 22 Nov 2021
    And why not take a course while you're at it.
  • Composability in Julia: Implementing Deep Equilibrium Models via Neural Odes
    2 projects | news.ycombinator.com | 21 Oct 2021
  • [2109.12449] AbstractDifferentiation.jl: Backend-Agnostic Differentiable Programming in Julia
    1 project | /r/Julia | 28 Sep 2021
  • Is that true?
    6 projects | /r/ProgrammerHumor | 8 Aug 2021
    Here's a good one. It's in Julia but it should do the trick. The main instructor is the most prolific Julia dev in the world.
  • [D] Has anyone worked with Physics Informed Neural Networks (PINNs)?
    3 projects | /r/MachineLearning | 21 May 2021
    NeuralPDE.jl fully automates the approach (and extensions of it, which are required to make it solve practical problems) from symbolic descriptions of PDEs, so that might be a good starting point to both learn the practical applications and get something running in a few minutes. As part of MIT 18.337 Parallel Computing and Scientific Machine Learning I gave an early lecture on physics-informed neural networks (with a two-part video) describing the approach, how it works, and what its challenges are. You might find those resources enlightening.
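
    For intuition, this is NOT the NeuralPDE.jl API but a hand-rolled sketch of the physics-informed loss it automates, with Flux.jl assumed for the network: train a small network u(t) so that the residual of u' + u = 0 and the initial condition u(0) = 1 are both driven to zero at sampled points.

        using Flux

        nn = Chain(Dense(1 => 16, tanh), Dense(16 => 1)) |> f64
        u(m, t) = m([t])[1]
        dudt(m, t) = (u(m, t + 1e-4) - u(m, t - 1e-4)) / 2e-4   # crude derivative to keep the sketch simple

        ts = range(0.0, 1.0; length = 32)
        loss(m) = sum(abs2, [dudt(m, t) + u(m, t) for t in ts]) / length(ts) + abs2(u(m, 0.0) - 1.0)

        opt = Flux.setup(Adam(1e-2), nn)
        for _ in 1:2000
            g = Flux.gradient(loss, nn)[1]
            Flux.update!(opt, nn, g)
        end
        @show maximum(abs, [u(nn, t) - exp(-t) for t in ts])    # compare with the exact solution exp(-t)
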
  • [P] Machine Learning in Physics?
    1 project | /r/MachineLearning | 13 May 2021
    It's a very thriving field. If you are interested in methods research and want to learn some of the techniques behind it, I would recommend taking a dive into my lecture notes as I taught a graduate course at MIT, 18.337 Parallel Computing and Scientific Machine Learning, specifically designed to get new students onboarded into this research program.
  • MIT 18.337J: Parallel Computing and Scientific Machine Learning
    1 project | news.ycombinator.com | 19 Mar 2021

What are some alternatives?

When comparing SciMLBenchmarks.jl and 18337 you can also consider the following projects:

DifferentialEquations.jl - Multi-language suite for high-performance solvers of differential equations and scientific machine learning (SciML) components. Ordinary differential equations (ODEs), stochastic differential equations (SDEs), delay differential equations (DDEs), differential-algebraic equations (DAEs), and more in Julia.

DataDrivenDiffEq.jl - Data driven modeling and automated discovery of dynamical systems for the SciML Scientific Machine Learning organization

julia - The Julia Programming Language

Vulpix - Fast, unopinionated, minimalist web framework for .NET core inspired by express.js

SciMLTutorials.jl - Tutorials for doing scientific machine learning (SciML) and high-performance differential equation solving with open source software.

NeuralPDE.jl - Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

ApproxFun.jl - Julia package for function approximation

RecursiveFactorization.jl

GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.

Diffractor.jl - Next-generation AD

BenchmarkTools.jl - A benchmarking framework for the Julia language