Uhh, they time the vmap of the jit on JAX, basically skipping a ton of optimizations, especially if there is any linear algebra in there. They also include the cost of building the vmapped function. Not a valid comparison.
https://github.com/utkarsh530/GPUODEBenchmarks/blob/ef807198...
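A minimal sketch of the pitfall being described, assuming JAX is installed (the function `f` and the array sizes are made up for illustration): timing a cold call to `vmap(jit(f))` includes tracing, compilation, and the cost of constructing the batched function, while the conventional approach jits the outermost level, warms it up once, and then times only steady-state device execution.

```python
import time

import jax
import jax.numpy as jnp


def f(x):
    # Toy per-element work standing in for an ODE right-hand side.
    return jnp.sin(x) ** 2 + jnp.cos(x) ** 2


xs = jnp.arange(1024.0)

# Pitfall: the timed region includes building the vmapped function
# and compiling, not just running the computation.
t0 = time.perf_counter()
cold = jax.vmap(jax.jit(f))(xs).block_until_ready()
cold_time = time.perf_counter() - t0

# Fairer: jit the outermost level, warm it up once (compilation happens
# here), then time only the steady-state execution.
g = jax.jit(jax.vmap(f))
g(xs).block_until_ready()  # warm-up call triggers compilation
t0 = time.perf_counter()
warm = g(xs).block_until_ready()  # steady-state execution only
warm_time = time.perf_counter() - t0

assert jnp.allclose(cold, warm)  # same values either way; only the timing differs
```

`block_until_ready()` matters because JAX dispatches work asynchronously; without it, the timer can stop before the device has finished.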
Link to GitHub repo from the abstract: https://github.com/SciML/DiffEqGPU.jl
Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize."
If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(Submitted title was "Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch". We've changed that to a shortened version of the paper title, to fit HN's 80 char limit.)
On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.
Source: https://github.com/google/jax/discussions/5199#discussioncom...
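A small sketch of that point, assuming JAX is installed (the `inner`/`outer` functions are hypothetical): when the outermost call is jitted, any inner jitted functions are traced as part of the same computation, so the end result matches a single top-level jit of the plain code.

```python
import jax
import jax.numpy as jnp


@jax.jit
def inner(x):
    return x * 2.0


@jax.jit
def outer(x):
    # Outermost jit; the inner jitted function is traced into this
    # computation rather than dispatched separately.
    return inner(x) + 1.0


def outer_plain(x):
    # Same computation written without any inner jit.
    return x * 2.0 + 1.0


x = jnp.float32(3.0)
assert outer(x) == jax.jit(outer_plain)(x)  # identical results either way
```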