NeuralPDE.jl vs AMDGPU.jl

Compare NeuralPDE.jl and AMDGPU.jl to see how they differ.

                NeuralPDE.jl                               AMDGPU.jl
Mentions        10                                         6
Stars           903                                        264
Stars growth    2.8%                                       1.9%
Activity        9.7                                        9.1
Latest commit   3 days ago                                 5 days ago
Language        Julia                                      Julia
License         GNU General Public License v3.0 or later   GNU General Public License v3.0 or later
Mentions is the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub; Growth is its month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

NeuralPDE.jl

Posts with mentions or reviews of NeuralPDE.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-05-26.
  • Automatically install huge number of dependency?
    1 project | /r/Julia | 31 May 2023
    The documentation has a manifest associated with it: https://docs.sciml.ai/NeuralPDE/dev/#Reproducibility. Instantiating the manifest will give you all of the exact versions used for the documentation build (https://github.com/SciML/NeuralPDE.jl/blob/gh-pages/v5.7.0/assets/Manifest.toml). In the Pkg REPL, just ]activate folder_of_manifest and then instantiate. Or you can use the Project.toml.
  • from Wolfram Mathematica to Julia
    2 projects | /r/Julia | 26 May 2022
    PDE solving libraries are MethodOfLines.jl and NeuralPDE.jl. NeuralPDE is very general but not very fast (it's a limitation of the method, PINNs are just slow). MethodOfLines is still somewhat under development but generates quite fast code.
  • AI and Scientific Computing in Kubernetes with the Julia Language, K8sClusterManagers.jl
    11 projects | dev.to | 12 Mar 2022
    GitHub - SciML/NeuralPDE.jl: Physics-Informed Neural Networks (PINN) and Deep BSDE Solvers of Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
  • [D] ICLR 2022 RESULTS ARE OUT
    1 project | /r/MachineLearning | 22 Jan 2022
    That doesn't mean there's no use case for PINNs; we wrote a giant review-ish kind of thing on NeuralPDE.jl to describe where PINNs might be useful. It's just... not the best for publishing. It's things like: (a) where you have not already optimized a classical method, (b) where you need something that's easy to generate solvers for different cases without too much worry about stability, (c) high-dimensional PDEs, and (d) surrogates over parameters. (c) and (d) are the two "real" use cases you can actually publish about, but PINNs aren't quite good for (c) (see mesh-free methods from the old radial basis function literature in comparison) or (d) (there are much faster surrogate techniques). So we are continuing to work on them for (a) and (b) as an interesting option as part of a software suite, but that's not the kind of thing that's really publishable, so I don't think we plan to ever submit that article anywhere.
  • [N] Open Colloquium by Prof. Max Welling: "Is the next deep learning disruption in the physical sciences?"
    1 project | /r/MachineLearning | 21 Oct 2021
  • [D] What are some ideas that are hyped up in machine learning research but don't actually get used in industry (and vice versa)?
    1 project | /r/MachineLearning | 16 Oct 2021
    Did this change at all with the advent of Physics Informed Neural Networks? The Julia language has some really impressive tools for that use case. https://github.com/SciML/NeuralPDE.jl
  • [Research] Input Arbitrary PDE -> Output Approximate Solution
    4 projects | /r/MachineLearning | 10 Jul 2021
    PDEs are difficult because you don't have a simple numerical definition over all PDEs because they can be defined by arbitrarily many functions. u' = Laplace u + f? Define f. u' = g(u) * Laplace u + f? Define f and g. Etc. To cover the space of PDEs you have to go symbolic at some point, and make the discretization methods dependent on the symbolic form. This is precisely what the ModelingToolkit.jl ecosystem is doing. One instantiation of a discretizer on this symbolic form is NeuralPDE.jl which takes a symbolic PDESystem and generates an OptimizationProblem for a neural network which represents the solution via a Physics-Informed Neural Network (PINN).
  • [D] Has anyone worked with Physics Informed Neural Networks (PINNs)?
    3 projects | /r/MachineLearning | 21 May 2021
    NeuralPDE.jl fully automates the approach (and extensions of it, which are required to make it solve practical problems) from symbolic descriptions of PDEs, so that might be a good starting point to both learn the practical applications and get something running in a few minutes. As part of MIT 18.337 Parallel Computing and Scientific Machine Learning I gave an early lecture on physics-informed neural networks (with a two part video) describing the approach, how it works and what its challenges are. You might find those resources enlightening.
  • Doing Symbolic Math with SymPy
    8 projects | news.ycombinator.com | 8 Jan 2021
    What is great about ModelingToolkit.jl is how its used in practical ways for other packages. E.g. NeuralPDE.jl.

    Compared to SymPy, I feel that it is less of a "how do I integrate this function" package and more about "how can I build this DSL" framework.

    https://github.com/SciML/NeuralPDE.jl
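The manifest-based reproducibility workflow from the first post above can be sketched with Julia's built-in Pkg. This is a minimal sketch: `env_dir` is a throwaway directory standing in for the folder that holds the downloaded Project.toml/Manifest.toml.

```julia
using Pkg

# Stand-in for the folder containing the docs' Project.toml/Manifest.toml;
# here we create a throwaway environment just to show the mechanics.
env_dir = mktempdir()
write(joinpath(env_dir, "Project.toml"), "[deps]\n")

Pkg.activate(env_dir)   # Pkg-REPL equivalent: ]activate env_dir
Pkg.instantiate()       # Pkg-REPL equivalent: instantiate
                        # installs the exact versions pinned in the manifest
println(Base.active_project())
```

With the real documentation manifest in `env_dir`, `Pkg.instantiate()` downloads exactly the package versions recorded for the docs build.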
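The symbolic-PDE-to-PINN pipeline described in the posts above can be sketched as follows. This is a tutorial-style sketch, not a definitive implementation: it assumes NeuralPDE.jl, Lux.jl, and the Optimization packages are installed, and names like `PhysicsInformedNN` and `GridTraining` follow NeuralPDE's documented API, which may differ between versions.

```julia
using NeuralPDE, Lux, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval

# Symbolic problem: u''(x) = -sin(x) on [0, 2π] with u(0) = u(2π) = 0
@parameters x
@variables u(..)
Dxx = Differential(x)^2

eq  = Dxx(u(x)) ~ -sin(x)
bcs = [u(0.0) ~ 0.0, u(2pi) ~ 0.0]
domains = [x ∈ Interval(0.0, 2pi)]
@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])

# The discretizer turns the symbolic PDESystem into an OptimizationProblem
# whose unknowns are the weights of a small neural network (the PINN).
chain = Lux.Chain(Lux.Dense(1, 16, tanh), Lux.Dense(16, 1))
discretization = PhysicsInformedNN(chain, GridTraining(0.05))
prob = discretize(pde_system, discretization)

sol = solve(prob, BFGS(); maxiters = 200)  # train the network
```

Changing the symbolic equation regenerates the training problem automatically, which is the point made above about discretizers operating on the symbolic form.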

AMDGPU.jl

Posts with mentions or reviews of AMDGPU.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-12.
  • Why is AMD leaving ML to nVidia?
    9 projects | /r/Amd | 12 Apr 2023
    For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system, using a 6800XT. In my experience, everything worked nicely. To install, you need to install the rocm-opencl package with dnf, install the AMD Julia package (AMDGPU.jl), add yourself to the video group, and you are good to go. Also, Julia's KernelAbstractions.jl is good to have when writing portable code.
  • [GUIDE] How to install ROCm for GPU Julia programming via Distrobox
    3 projects | /r/steamdeck_linux | 3 Jan 2023
    The Julia package AMDGPU.jl provides a Julia interface for AMD GPU (ROCm) programming. As the developers note, the package is being developed for Julia 1.7, 1.9, and above, but not 1.8. Therefore I downloaded the Julia 1.7.3 binary from the Julia older releases page.
  • First True Exascale Supercomputer
    2 projects | news.ycombinator.com | 6 Jul 2022
    This is exciting news! What's also exciting is that it's not just C++ that can run on this supercomputer; there is also good (currently unofficial) support for programming those GPUs from Julia, via the AMDGPU.jl library (note: I am the author/maintainer of this library). Some of our users have been able to run AMDGPU.jl's testsuite on the Crusher test system (which is an attached testing system with the same hardware configuration as Frontier), as well as their own domain-specific programs that use AMDGPU.jl.

    What's nice about programming GPUs in Julia is that you can write code once and execute it on multiple kinds of GPUs, with excellent performance. The KernelAbstractions.jl library makes this possible for compute kernels by acting as a frontend to AMDGPU.jl, CUDA.jl, and soon Metal.jl and oneAPI.jl, allowing a single piece of code to be portable to AMD, NVIDIA, Intel, and Apple GPUs, and also CPUs. Similarly, the GPUArrays.jl library allows the same behavior for idiomatic array operations, and will automatically dispatch calls to BLAS, FFT, RNG, linear solver, and DNN vendor-provided libraries when appropriate.

    I'm personally looking forward to helping researchers get their Julia code up and running on Frontier so that we can push scientific computing to the max!

    Library link: <https://github.com/JuliaGPU/AMDGPU.jl>

  • AI and Scientific Computing in Kubernetes with the Julia Language, K8sClusterManagers.jl
    11 projects | dev.to | 12 Mar 2022
    GitHub - JuliaGPU/AMDGPU.jl: AMD GPU (ROCm) programming in Julia
  • Cuda.jl v3.3: union types, debug info, graph APIs
    8 projects | news.ycombinator.com | 13 Jun 2021
    https://github.com/JuliaGPU/AMDGPU.jl

    https://github.com/JuliaGPU/oneAPI.jl

    These are both less mature than CUDA.jl, but are in active development.

  • Unified programming model for all devices – will it catch on?
    2 projects | news.ycombinator.com | 1 Mar 2021
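The Fedora setup described in the first post above amounts to a few commands. Package and group names are as the poster reports; they may vary by distribution and ROCm release.

```shell
# ROCm runtime bits (Fedora; package names vary by distro/release)
sudo dnf install rocm-opencl

# GPU device access for your user; log out and back in afterwards
sudo usermod -aG video "$USER"

# Julia side: AMDGPU.jl plus KernelAbstractions.jl for portable kernels
julia -e 'using Pkg; Pkg.add(["AMDGPU", "KernelAbstractions"])'
```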
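The write-once-run-anywhere claim in the Frontier post above can be illustrated with a KernelAbstractions.jl kernel. This is a sketch assuming KernelAbstractions.jl is installed; swapping `CPU()` for AMDGPU.jl's `ROCBackend()` (or CUDA.jl's `CUDABackend()`) retargets the same kernel to a GPU.

```julia
using KernelAbstractions

# One kernel definition, portable across CPU and GPU backends.
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

backend = CPU()  # replace with ROCBackend() on AMD GPUs
x = rand(Float32, 1024)
y = zeros(Float32, 1024)

kernel! = axpy_kernel!(backend, 64)      # 64 = workgroup size
kernel!(y, 2f0, x; ndrange = length(x))  # launch over all 1024 indices
KernelAbstractions.synchronize(backend)
```

The backend object is the only device-specific piece; the kernel body itself never mentions a vendor API, which is what makes the same source portable across AMD, NVIDIA, Intel, and Apple GPUs as well as CPUs.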

What are some alternatives?

When comparing NeuralPDE.jl and AMDGPU.jl you can also consider the following projects:

deepxde - A library for scientific machine learning and physics-informed learning

Vulkan.jl - Using Vulkan from Julia

SymPy - A computer algebra system written in pure Python

oneAPI.jl - Julia support for the oneAPI programming toolkit.

ModelingToolkit.jl - An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations

KernelAbstractions.jl - Heterogeneous programming in Julia

ReservoirComputing.jl - Reservoir computing utilities for scientific machine learning (SciML)

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

18337 - 18.337 - Parallel Computing and Scientific Machine Learning

GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.

Gridap.jl - Grid-based approximation of partial differential equations in Julia

julia-distributed-computing - The ultimate guide to distributed computing in Julia