Julia neural-ode

Top 6 Julia neural-ode Projects

  • DiffEqFlux.jl

    Pre-built implicit layer architectures with O(1) backprop, GPU support, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods (see the usage sketches after this list)

  • SciMLSensitivity.jl

    A component of the DiffEq ecosystem that enables sensitivity analysis for scientific machine learning (SciML): optimize-then-discretize, discretize-then-optimize, adjoint methods, and more for ODEs, SDEs, DDEs, DAEs, etc. (sketch below)

  • DiffEqBase.jl

    The lightweight Base library of shared types and functionality for defining differential equation and scientific machine learning (SciML) problems (sketch below)

  • ComponentArrays.jl

    Arrays with arbitrarily nested named components (sketch below).

  • DiffEqGPU.jl

    GPU-acceleration routines for DifferentialEquations.jl and the broader SciML scientific machine learning ecosystem (sketch below)

    Project mention: 2023 was the year that GPUs stood still | news.ycombinator.com | 2023-12-29

    Indeed, and this year we created a system for compiling ODE code into not just optimized CUDA kernels but also oneAPI, AMD GPU, and Metal kernels. The peer-reviewed version is here (https://www.sciencedirect.com/science/article/abs/pii/S00457...), the open-access version is here (https://arxiv.org/abs/2304.06835), and the open source code is at https://github.com/SciML/DiffEqGPU.jl. The key result the paper describes is that kernel generation in this case is about 20x-100x faster than PyTorch and JAX (see the JAX compilation attempted in multiple ways in this notebook: https://colab.research.google.com/drive/1d7G-O5JX31lHbg7jTzz...; there is extra overhead from calling Julia from Python, but it still shows a 10x speedup).

    The real point is that while deep learning libraries are amazing, at the end of the day they are DSLs that pull you toward one specific way of computing and parallelizing. That way of parallelizing turns out to be good for deep learning, but not for everything you may want to accelerate. Sometimes (i.e., in cases that aren't dominated by large linear algebra) building problem-specific kernels is a major win, and it's over-extrapolating to see ML frameworks do well on GPUs and conclude that's all that's ever required. There are many ways to parallelize code; ML libraries hardcode one very specific way, which is good for what they are used for but not for every problem that can arise.

  • BoundaryValueDiffEq.jl

    Boundary value problem (BVP) solvers for scientific machine learning (SciML) (sketch below)
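
Usage sketches

A minimal DiffEqFlux.jl sketch, assuming the Lux-based API of recent releases; the network shape, time span, and save points here are illustrative, not taken from the project docs:

    using DiffEqFlux, OrdinaryDiffEq, Lux, Random

    rng = Random.default_rng()
    dudt = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 2))   # learned vector field
    node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)  # implicit-layer wrapper

    ps, st = Lux.setup(rng, node)   # initialize network parameters and state
    u0 = Float32[2.0, 0.0]
    sol, _ = node(u0, ps, st)       # a forward pass is an ODE solve; adjoints handle backprop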
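
A SciMLSensitivity.jl sketch: differentiating a cost of an ODE solution with respect to the parameters via a continuous adjoint method. The toy ODE and the choice of InterpolatingAdjoint are illustrative:

    using OrdinaryDiffEq, SciMLSensitivity, Zygote

    f(u, p, t) = p[1] .* u                          # du/dt = p*u
    prob = ODEProblem(f, [1.0], (0.0, 1.0), [-0.5])

    # reverse-mode gradient of a solution cost w.r.t. p, using an adjoint sensitivity algorithm
    loss(p) = sum(solve(prob, Tsit5(); p = p, saveat = 0.1,
                        sensealg = InterpolatingAdjoint()))
    grad, = Zygote.gradient(loss, [-0.5])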
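
For DiffEqBase.jl, end users rarely load it directly; the shared problem types it underpins are re-exported by the solver packages. A minimal problem-definition sketch:

    using OrdinaryDiffEq          # re-exports the shared SciML problem types

    function lotka!(du, u, p, t)  # in-place Lotka-Volterra vector field
        du[1] =  p[1] * u[1] - p[2] * u[1] * u[2]
        du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
    end

    prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0), [1.5, 1.0, 3.0, 1.0])
    sol  = solve(prob, Tsit5())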
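
A ComponentArrays.jl sketch; the field names are arbitrary. The point is that nested named components live in one flat vector, which is what ODE solvers and optimizers want:

    using ComponentArrays

    p = ComponentArray(layer = (W = randn(3, 2), b = zeros(3)), decay = 0.1)
    p.layer.W     # named, nested view ...
    getdata(p)    # ... over a single flat contiguous vector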
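
A DiffEqGPU.jl sketch of the kernel-generating ensemble path discussed in the mention above, assuming a CUDA device; the AMDGPU, oneAPI, and Metal backends slot in the same way. The toy decay ODE is illustrative:

    using OrdinaryDiffEq, DiffEqGPU, CUDA, StaticArrays

    decay(u, p, t) = SA[p[1] * u[1]]    # out-of-place, StaticArrays-based RHS
    prob = ODEProblem{false}(decay, SA[1.0f0], (0.0f0, 1.0f0), SA[-0.5f0])

    # many trajectories, each with its own parameters, compiled into one GPU kernel
    monteprob = EnsembleProblem(prob,
        prob_func = (prob, i, repeat) -> remake(prob; p = SA[-rand(Float32)]))
    sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
                trajectories = 10_000)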
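
A BoundaryValueDiffEq.jl sketch of a two-point BVP; the pendulum and boundary values are illustrative, and solver names may vary between versions:

    using BoundaryValueDiffEq

    function pend!(du, u, p, t)     # pendulum: u[1] = θ, u[2] = θ′
        du[1] = u[2]
        du[2] = -9.81 * sin(u[1])
    end

    function bc!(res, u, p, t)      # boundary conditions on θ at both ends
        res[1] = u[1][1] - pi / 2   # θ(t0)   = π/2
        res[2] = u[end][1] - pi / 2 # θ(tend) = π/2
    end

    bvp = BVProblem(pend!, bc!, [pi / 2, 0.0], (0.0, 1.5))
    sol = solve(bvp, MIRK4(); dt = 0.05)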

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates how often a repo has been mentioned in the last 12 months, or since we started tracking (Dec 2020). The latest post mention was on 2023-12-29.

Index

What are some of the best open-source neural-ode projects in Julia? This list will help you:

#   Project                  Stars
1   DiffEqFlux.jl              828
2   SciMLSensitivity.jl        305
3   DiffEqBase.jl              291
4   ComponentArrays.jl         272
5   DiffEqGPU.jl               268
6   BoundaryValueDiffEq.jl      39