DifferentialEquations.jl vs CUDA.jl
| | DifferentialEquations.jl | CUDA.jl |
|---|---|---|
| Mentions | 6 | 15 |
| Stars | 2,754 | 1,131 |
| Growth | 1.5% | 2.8% |
| Activity | 7.3 | 9.5 |
| Latest commit | 18 days ago | 6 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DifferentialEquations.jl
- Startups are building with the Julia Programming Language
The documentation lists some of its unique abilities:
https://docs.sciml.ai/DiffEqDocs/stable/
The routines are sufficiently generic, with regard to Julia’s type system, to allow the solvers to automatically compose with other packages and to seamlessly use types other than Numbers. For example, instead of handling just functions Number→Number, you can define your ODE in terms of quantities with physical dimensions, uncertainties, quaternions, etc., and it will just work (for example, propagating uncertainties correctly to the solution¹). Recent developments involve research into the automated selection of solution routines based on the properties of the ODE, something that seems really next-level to me.
[1] https://lwn.net/Articles/834571/
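As a small illustration of that composability, here is a minimal sketch assuming the Measurements.jl package for the ± uncertainty type; the generic solvers propagate the uncertainties through the solution automatically:

```julia
using DifferentialEquations, Measurements

# Logistic growth with an uncertain rate and initial condition.
# The ± operator from Measurements.jl creates numbers that carry
# uncertainties, and the solvers work with them like any other Number.
r  = 1.0 ± 0.1    # uncertain growth rate
u0 = 0.5 ± 0.05   # uncertain initial state
f(u, p, t) = p * u * (1 - u)

prob = ODEProblem(f, u0, (0.0, 5.0), r)
sol  = solve(prob, Tsit5())
println(sol[end])  # a Measurement: value ± propagated uncertainty
```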
- From Common Lisp to Julia
https://github.com/SciML/DifferentialEquations.jl/issues/786. As you can see from the tweet, it's now at 0.1 seconds. That improvement happened within one year.
Also, if you take a look at a tutorial, say the tutorial video from 2018…
- When is Julia getting proper precompilation?
It's not faith, and it's not all from Julia itself. https://github.com/SciML/DifferentialEquations.jl/issues/785 should reduce compile times for what the OP mentioned, for example.
- Julia 1.7 has been released
Let's even put raw numbers to it. DifferentialEquations.jl usage has seen compile times drop from 22 seconds to 3 seconds over the last few months.
https://github.com/SciML/DifferentialEquations.jl/issues/786
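For context, the numbers above refer to "time to first solve" latency. A minimal sketch of how one can observe it, using only the standard DifferentialEquations.jl API:

```julia
using DifferentialEquations

# The first solve includes compilation; the second reuses compiled code.
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))
@time solve(prob, Tsit5())  # dominated by compile time on first call
@time solve(prob, Tsit5())  # subsequent calls are fast
```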
- Suggest me a good library for scientific computing in Julia with good support for multi-core CPUs and GPUs
- DifferentialEquations compilation issue in Julia 1.6
https://github.com/SciML/DifferentialEquations.jl/issues/737 was double-posted, with the answer here. Please don't do that.
CUDA.jl
- Ask HN: Best way to learn GPU programming?
It would also mean learning Julia, but you can write GPU kernels in Julia and then compile for NVIDIA CUDA, AMD ROCm, or Intel oneAPI.
https://juliagpu.org/
I've written CUDA kernels and I knew nothing about it going in.
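To give a flavor of what that looks like, here is a minimal kernel sketch using CUDA.jl's standard entry points (threadIdx, blockIdx, @cuda):

```julia
using CUDA

# An elementwise saxpy kernel (y .= a .* x .+ y) written directly in Julia.
function saxpy!(a, x, y)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CUDA.fill(1.0f0, 1024)
y = CUDA.fill(2.0f0, 1024)
# Launch with enough blocks of 256 threads to cover the array.
@cuda threads=256 blocks=cld(length(y), 256) saxpy!(3.0f0, x, y)
```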
- What's your main programming language?
- How is Julia Performance with GPUs (for LLMs)?
See https://juliagpu.org/
- Yann LeCun: ML would have advanced if other lang had been adopted versus Python
If you look at Julia open source projects you'll see that they tend to have a lot more contributors than their Python counterparts, even over shorter time periods. A package for defining statistical distributions has had 202 contributors (https://github.com/JuliaStats/Distributions.jl), etc. Julia Base has had over 1,300 contributors (https://github.com/JuliaLang/julia), which is quite a lot for a core language, and that's mostly because the majority of the core is written in Julia itself.
This is one of the things that was noted quite a bit at the SIAM CSE conference: Julia development tends to have a lot more code reuse than other ecosystems like Python. For example, the various machine learning libraries like Flux.jl and Lux.jl share a lot of layer intrinsics in NNlib.jl (https://github.com/FluxML/NNlib.jl), the same GPU libraries (https://github.com/JuliaGPU/CUDA.jl), the same automatic differentiation library (https://github.com/FluxML/Zygote.jl), and of course the same JIT compiler (Julia itself). These two libraries are far enough apart that people say "Flux is to PyTorch as Lux is to JAX/flax", but while in the Python world those share almost no code or implementation, in the Julia world they share >90% of the core internals while having different higher-level APIs.
If one hasn't participated in this space, it's a bit hard to fathom how much code reuse goes on and how that is influenced by the design of multiple dispatch. This is one of the reasons there is so much cohesion in the community: it doesn't matter if one person is an ecologist and the other is a financial engineer, both may be contributing to the same library, like Distances.jl, each adding a distance function that is then used in thousands of places. In the Python ecosystem you tend to have a lot more "megapackages", PyTorch, SciPy, etc., where the barrier to entry is generally a lot higher (and sometimes requires handling the build systems, fun times). But in the Julia ecosystem a lot of core development happens in "small" but central libraries, like Distances.jl or Distributions.jl, which are simple enough for an undergrad to get productive in a week but are then used everywhere (Distributions.jl, for example, is used in every statistics package and for defining prior distributions in Turing.jl's probabilistic programming language, etc.).
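A minimal, self-contained sketch of the dispatch-driven reuse pattern described above (hypothetical names, not the actual Distances.jl API): a contributor adds one method, and every generic routine built on the abstract type picks it up.

```julia
# Hypothetical illustration of multiple-dispatch code reuse.
abstract type Metric end

struct Taxicab <: Metric end
(::Taxicab)(a, b) = sum(abs, a .- b)  # a contributor adds just this method

# Generic library code, written once, works for every Metric subtype:
pairwise(d::Metric, xs) = [d(a, b) for a in xs, b in xs]

xs = [rand(3) for _ in 1:4]
pairwise(Taxicab(), xs)  # the new metric composes with existing generic code
```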
- C++ is making me depressed / CUDA question
If you just want to do some numerical code that requires linear algebra and GPU, your best bet would be Julia or Python+JAX.
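As a rough sketch of the array-level style that comment refers to, no hand-written kernels, assuming CUDA.jl's CuArray coverage of the standard linear-algebra operations:

```julia
using CUDA, LinearAlgebra

# Array-level GPU programming: standard linear-algebra syntax,
# dispatched to cuBLAS/cuSOLVER under the hood.
A = CUDA.rand(Float32, 1024, 1024)
b = CUDA.rand(Float32, 1024)
x = A \ b            # dense solve on the GPU
norm(A * x - b)      # residual, also computed on the GPU
```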
- Almost trivial distributed parallelization of stencil-based GPU and CPU applications with…
GitHub - JuliaGPU/CUDA.jl: CUDA programming in Julia.
- Why Fortran is easy to learn
- Generic GPU Kernels
Should have (2017) in the title.
It is indeed cool to program Julia directly on the GPU, and this has evolved further since then; see https://juliagpu.org/
- Announcing The Rust CUDA Project; An ecosystem of crates and tools for writing and executing extremely fast GPU code fully in Rust
I'm excited to eventually see something like JuliaGPU with support for multiple backends.
- [Media] 100% Rust path tracer running on CPU, GPU (CUDA), and OptiX (for denoising) using one of my upcoming projects. There is no C/C++ code at all; the program shares a single Rust crate for the core raytracer and uses Rust for the viewer and renderer.
That's really cool! Have you looked at CUDA.jl for the Julia language? Maybe you could take some ideas from there. I am pretty sure it does the same thing you do here, and they support arbitrary code with the limitations that you cannot allocate memory, I/O is disallowed, and badly typed (dynamic) code will not compile.
What are some alternatives?
ModelingToolkit.jl - An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations
LoopVectorization.jl - Macro(s) for vectorizing loops.
diffeqpy - Solving differential equations in Python using DifferentialEquations.jl and the SciML Scientific Machine Learning organization
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
Gridap.jl - Grid-based approximation of partial differential equations in Julia
awesome-quant - A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
ApproxFun.jl - Julia package for function approximation
cudf - cuDF - GPU DataFrame Library
DiffEqBase.jl - The lightweight Base library for shared types and functionality for defining differential equation and scientific machine learning (SciML) problems
Tullio.jl - ⅀
FFTW.jl - Julia bindings to the FFTW library for fast Fourier transforms
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.