DiffEqGPU.jl Alternatives
Similar projects and alternatives to DiffEqGPU.jl
- jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- GPUODEBenchmarks: Comparison of Julia's GPU kernel-based ODE solvers with other open-source GPU ODE solvers
- DiffEqBase.jl: The lightweight Base library for shared types and functionality for defining differential equation and scientific machine learning (SciML) problems
- SciMLSensitivity.jl: A component of the DiffEq ecosystem for enabling sensitivity analysis for scientific machine learning (SciML). Optimize-then-discretize, discretize-then-optimize, adjoint methods, and more for ODEs, SDEs, DDEs, DAEs, etc.
- DifferentialEquations.jl: Multi-language suite for high-performance solvers of differential equations and scientific machine learning (SciML) components. Ordinary differential equations (ODEs), stochastic differential equations (SDEs), delay differential equations (DDEs), differential-algebraic equations (DAEs), and more in Julia.
DiffEqGPU.jl reviews and mentions
- 2023 was the year that GPUs stood still
Indeed, and this year we created a system for compiling ODE code not just to optimized CUDA kernels but also to oneAPI kernels, AMD GPU kernels, and Metal. The peer-reviewed version is here (https://www.sciencedirect.com/science/article/abs/pii/S00457...), the open-access version is here (https://arxiv.org/abs/2304.06835), and the open source code is at https://github.com/SciML/DiffEqGPU.jl. The key result the paper describes is that, in this case, kernel generation is about 20x-100x faster than PyTorch and Jax (see the Jax version compiled in multiple ways in this notebook: https://colab.research.google.com/drive/1d7G-O5JX31lHbg7jTzz...; there is extra overhead from calling Julia from Python, but it still shows a 10x speedup).
The point really is that while deep learning libraries are amazing, at the end of the day they are DSLs that pull you towards one specific way of computing and parallelizing. It turns out that way of parallelizing is good for deep learning, but not for everything you may want to accelerate. Sometimes (i.e., in cases that aren't dominated by large linear algebra) building problem-specific kernels is a major win, and it's over-extrapolating to see ML frameworks do well on GPUs and conclude that's the only thing that's required. There are many ways to parallelize code; ML libraries hardcode one very specific way, which is good for what they are used for but not for every problem that can arise.
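To make the kernel-generation approach described above concrete, the sketch below follows the ensemble-GPU pattern from DiffEqGPU.jl's documentation. The Lorenz system, the random parameter perturbation, and the trajectory count are illustrative choices, and exact keyword names may differ across package versions; the CUDA backend shown can in principle be swapped for the oneAPI, AMDGPU, or Metal backends mentioned in the comment.

```julia
# Illustrative sketch: solving a large ensemble of small ODEs by compiling the
# whole solver into a single GPU kernel (DiffEqGPU.jl's EnsembleGPUKernel path).
using DiffEqGPU, OrdinaryDiffEq, StaticArrays, CUDA

# Out-of-place ODE with static arrays, so the right-hand side can be inlined
# into the generated kernel.
function lorenz(u, p, t)
    σ, ρ, β = p
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0, 0.0f0, 0.0f0]
p  = @SVector [10.0f0, 28.0f0, 8.0f0 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, (0.0f0, 10.0f0), p)

# Each trajectory gets its own perturbed parameters (an assumption for this example).
prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)) .* p)
monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)

# GPUTsit5 runs every trajectory inside one fused kernel launch; replacing
# CUDA.CUDABackend() with another vendor's backend targets oneAPI, ROCm, or Metal.
sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend()),
            trajectories = 10_000, saveat = 1.0f0)
```

This is the style of workload (many small independent ODE solves, with no large dense linear algebra) where fused, problem-specific kernels give the speedups cited above.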
- Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
Link to GitHub repo from the abstract: https://github.com/SciML/DiffEqGPU.jl
Stats
SciML/DiffEqGPU.jl is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of DiffEqGPU.jl is Julia.