Top 6 Julia neural-ode Projects
-
DiffEqFlux.jl
Pre-built implicit layer architectures with O(1) backprop, GPUs, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods
-
SciMLSensitivity.jl
A component of the DiffEq ecosystem for enabling sensitivity analysis for scientific machine learning (SciML). Optimize-then-discretize, discretize-then-optimize, adjoint methods, and more for ODEs, SDEs, DDEs, DAEs, etc.
-
DiffEqBase.jl
The lightweight Base library for shared types and functionality for defining differential equation and scientific machine learning (SciML) problems
-
ComponentArrays.jl
Arrays with arbitrarily nested named components
-
DiffEqGPU.jl
GPU-acceleration routines for DifferentialEquations.jl and the broader SciML scientific machine learning ecosystem
Indeed, and this year we created a system for compiling ODE code into not just optimized CUDA kernels but also oneAPI kernels, AMD GPU kernels, and Metal kernels. The peer-reviewed version is here (https://www.sciencedirect.com/science/article/abs/pii/S00457...), the open-access version is here (https://arxiv.org/abs/2304.06835), and the open source code is at https://github.com/SciML/DiffEqGPU.jl. The key result the paper describes is that in this case kernel generation is about 20x-100x faster than PyTorch and Jax (see the Jax compilation done multiple ways in this notebook: https://colab.research.google.com/drive/1d7G-O5JX31lHbg7jTzz...; there is extra overhead from calling Julia from Python, but it still shows a 10x speedup).
The point really is that while deep learning libraries are amazing, at the end of the day they are DSLs that pull towards one specific way of computing and parallelizing. It turns out that way of parallelizing is good for deep learning, but not for everything you may want to accelerate. Sometimes (i.e., in cases that aren't dominated by large linear algebra) building problem-specific kernels is a major win, and it's over-extrapolating to see ML frameworks do well with GPUs and conclude that's all that's required. There are many ways to parallelize code; ML libraries hardcode one very specific way, which is good for what they are used for but not for every problem that can arise. A minimal sketch of this kernel-generation ensemble workflow is shown after this list.
-
BoundaryValueDiffEq.jl
Boundary value problem (BVP) solvers for scientific machine learning (SciML)
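To make the kernel-generation approach above concrete, here is a minimal sketch of solving a large ensemble of ODEs as a single fused GPU kernel with DiffEqGPU.jl. It assumes a CUDA-capable GPU and a recent DiffEqGPU.jl version; the Lorenz system, the parameter-perturbing prob_func, and the CUDA.CUDABackend() backend constructor are illustrative choices, and the exact backend API can differ between versions (other backends such as oneAPI, AMDGPU, and Metal are selected analogously).

```julia
# Sketch: many parameter variations of an ODE solved as one fused GPU kernel
# via DiffEqGPU.jl's EnsembleGPUKernel with the GPU-specialized Tsit5 solver.
using OrdinaryDiffEq, DiffEqGPU, CUDA, StaticArrays

# Out-of-place RHS with static arrays keeps the problem GPU-kernel compatible.
function lorenz(u, p, t)
    σ, ρ, β = p
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0, 0.0f0, 0.0f0]
p = @SVector [10.0f0, 28.0f0, 8.0f0 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, (0.0f0, 10.0f0), p)

# Each trajectory gets a randomly scaled parameter set; all trajectories are
# compiled into and launched as a single problem-specific kernel.
prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)) .* p)
ensemble = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)

sol = solve(ensemble, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
            trajectories = 10_000, saveat = 0.1f0)
```

The pairing of a GPU-specialized solver (GPUTsit5) with EnsembleGPUKernel is what lets the whole ensemble run as one kernel rather than many separate library calls, which is where the speedups quoted above come from.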
Julia neural-ode related posts
- [D] Machine learning and black box numerical solver
- Accurate and Efficient Physics-Informed Learning Through Differentiable Simulation - Chris Rackauckas (ASA Statistical Computing & Graphics Sections)
- Why Fortran is easy to learn
- [R] New directions in Neural Differential Equations
- JuliaSim - Simulating Reality (new product by Julia Computing)
- Rust vs Fortran
- Odd Behavior: Neural network hybrid differential equation example
Index
What are some of the best open-source neural-ode projects in Julia? This list will help you:
# | Project | Stars
---|---|---
1 | DiffEqFlux.jl | 828 |
2 | SciMLSensitivity.jl | 305 |
3 | DiffEqBase.jl | 291 |
4 | ComponentArrays.jl | 272 |
5 | DiffEqGPU.jl | 268 |
6 | BoundaryValueDiffEq.jl | 39 |