-
GPUODEBenchmarks
Comparison of Julia's GPU kernel-based ODE solvers with other open-source GPU ODE solvers
Uhh, they time the vmap of the jit on JAX, basically skipping a ton of optimizations, especially if there is any linear algebra in there. They also include the cost of building the vmap functional. Not a valid comparison.
https://github.com/utkarsh530/GPUODEBenchmarks/blob/ef807198...
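To make the concern concrete, here is a minimal sketch of the pitfall being described. The toy right-hand side, array shapes, and timing harness are illustrative assumptions, not taken from the benchmark repo; the point is only the difference between timing a cold vmap-of-jit call (which includes building the vmap functional, tracing, and XLA compilation) and timing a warmed-up, top-level-jitted function.

    import time
    import jax
    import jax.numpy as jnp

    def step(y):
        # hypothetical stand-in for an ODE right-hand side
        return y + 0.01 * (-y + jnp.sin(y))

    ys = jnp.ones((10_000, 3))

    # What the benchmark effectively times: the first call pays for
    # building the vmap functional, tracing, and XLA compilation.
    t0 = time.perf_counter()
    jax.vmap(jax.jit(step))(ys).block_until_ready()
    print("cold, includes compile:", time.perf_counter() - t0)

    # Steady-state timing: jit at the top level, warm up once, then time.
    batched = jax.jit(jax.vmap(step))
    batched(ys).block_until_ready()  # warm-up call triggers compilation

    t0 = time.perf_counter()
    batched(ys).block_until_ready()  # block so async dispatch finishes
    print("warm, kernel only:", time.perf_counter() - t0)

The block_until_ready() calls matter because JAX dispatches work asynchronously; without them the timer can stop before the GPU kernel has actually run.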
-
DiffEqGPU.jl
GPU-acceleration routines for DifferentialEquations.jl and the broader SciML scientific machine learning ecosystem
Link to GitHub repo from the abstract: https://github.com/SciML/DiffEqGPU.jl
-
Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize."
If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(Submitted title was "Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch". We've changed that to a shortened version of the paper title, to fit HN's 80 char limit.)
-
jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.
Source: https://github.com/google/jax/discussions/5199#discussioncom...
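A small self-contained check of that claim; the function names here are made up for illustration. Jitting at the top level gives the same result whether or not an inner function carries its own jit, since the inner jit is staged into the outer trace rather than dispatched separately.

    import jax
    import jax.numpy as jnp

    @jax.jit
    def inner(x):
        return jnp.tanh(x) * 2.0

    @jax.jit
    def outer_nested(x):
        # inner is itself jitted; under the outer trace it becomes part
        # of the same compiled program
        return inner(x) + 1.0

    @jax.jit
    def outer_flat(x):
        # same computation written without the inner jit
        return jnp.tanh(x) * 2.0 + 1.0

    x = jnp.arange(4.0)
    assert jnp.allclose(outer_nested(x), outer_flat(x))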
Related posts
-
2023 was the year that GPUs stood still
-
Suggest me a Good library for scientific computing in Julia with good support for multi-core CPUs and GPUs.
-
An Introduction to Neural Ordinary Differential Equations [pdf]
-
Bridging numerical relativity and automatic differentiation using JAX
-
JetStream: Throughput+memory optimized engine for LLM inference on XLA devices