SciPy
NeuralPDE.jl
| | SciPy | NeuralPDE.jl |
|---|---|---|
| Mentions | 50 | 10 |
| Stars | 12,459 | 903 |
| Growth | 1.9% | 2.8% |
| Activity | 9.9 | 9.7 |
| Latest commit | 1 day ago | 5 days ago |
| Language | Python | Julia |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SciPy
-
What Is a Schur Decomposition?
I guess it is a rite of passage to rewrite it. I'm doing it for SciPy too, together with PROPACK, in [1]. Somebody already mentioned your repo. Thank you for your efforts.
[1]: https://github.com/scipy/scipy/issues/18566
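For reference, the Schur decomposition mentioned in the title is directly available in SciPy; a minimal sketch:

```python
import numpy as np
from scipy.linalg import schur

# Real Schur form: A = Z @ T @ Z.T with Z orthogonal and T quasi-upper-triangular
# (2x2 blocks on T's diagonal correspond to complex-conjugate eigenvalue pairs).
A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])
T, Z = schur(A)

# Verify the factorization round-trips
print(np.allclose(Z @ T @ Z.T, A))
```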
-
Fortran codes are causing problems
Fortran codes have caused many problems for the Python package SciPy, and some of them are now being rewritten in C: e.g., https://github.com/scipy/scipy/pull/19121. Not only does R itself contain a lot of Fortran code; there are also many R packages that use Fortran: https://github.com/r-devel/r-svn, https://github.com/cran?q=&type=&language=fortran&sort=. Modern Fortran is a fine language, but most legacy Fortran codes use the F77 style. When I update the R package quantreg, which uses a lot of Fortran, I get many warning messages. I am not sure how the Fortran codes in the R ecosystem will be dealt with in the future, but they recently caused an issue in R due to the lack of compiler support for Fortran on 64-bit ARM Windows: https://blog.r-project.org/2023/08/23/will-r-work-on-64-bit-arm-windows/index.html. Some renowned packages like glmnet have already had their Fortran code rewritten in C/C++: https://cran.r-project.org/web/packages/glmnet/news/news.html
-
[D] Which BLAS library to choose for apple silicon?
There are several lessons here: a) vanilla conda-forge NumPy and SciPy builds come with OpenBLAS, and it works pretty well; b) do not use Netlib unless your matrices are small and you need to do a lot of SVDs; c) Apple's vecLib/Accelerate is super fast, but it is also numerically unstable. So much so that the SciPy devs dropped all support for it back in 2018. Like dang. That said, they are apparently bringing it back, since the 13.3 release of macOS Ventura saw some major improvements in Accelerate performance.
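To see which BLAS/LAPACK a given install is linked against, and to sanity-check it numerically, something like this works regardless of backend (OpenBLAS, MKL, Accelerate, reference Netlib):

```python
import numpy as np
import scipy.linalg

# Print the BLAS/LAPACK libraries NumPy was built against
np.show_config()

# Numerical sanity check: an SVD round-trip should reconstruct the matrix
# to near machine precision on any healthy backend.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))
U, s, Vt = scipy.linalg.svd(A, full_matrices=False)
rel_err = np.linalg.norm(U @ (s[:, None] * Vt) - A) / np.linalg.norm(A)
print(f"relative SVD reconstruction error: {rel_err:.2e}")
```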
-
SciPy: Interested in adopting PRIMA, but little appetite for more Fortran code
First, if you read through that scipy issue (https://github.com/scipy/scipy/issues/18118 ) the author was willing and able to relicense PRIMA under a 3-clause BSD license which is perfectly acceptable for scipy.
For the numerical recipes reference, there is a mention that scipy uses a slightly improved version of Powell's algorithm that is originally due to Forman Acton and presumably published in his popular book on numerical analysis, and that also happens to be described & included in numerical recipes. That is, unless the code scipy uses is copied from numerical recipes, which I presume it isn't, NR having the same algorithm doesn't mean that every other independent implementation of that algorithm falls under NR copyright.
- numerically evaluating wavelets?
- Fortran in SciPy: Get rid of linalg.interpolative Fortran code
-
Optimization Without Using Derivatives
Reading the discussions under a previous thread titled "More Descent, Less Gradient"( https://news.ycombinator.com/item?id=23004026 ), I guess people might be interested in PRIMA ( www.libprima.net ), which provides the reference implementation for Powell's renowned gradient/derivative-free (zeroth-order) optimization methods, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA.
PRIMA solves general nonlinear optimization problems without using derivatives. It implements Powell's solvers in modern Fortran, complying with the Fortran 2008 standard. The implementation is faithful, in the sense of being mathematically equivalent to Powell's Fortran 77 implementation, but with better numerical performance. In contrast to the 7939 lines of Fortran 77 code with 244 GOTOs, the new implementation is structured and modularized.
There is a discussion to include the PRIMA solvers into SciPy ( https://github.com/scipy/scipy/issues/18118 ), replacing the buggy and unmaintained Fortran 77 version of COBYLA, and making the other four solvers available to all SciPy users.
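For readers who want to try the interface the thread proposes to back with PRIMA: COBYLA is already exposed through scipy.optimize.minimize. A small made-up constrained problem as a sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free constrained minimization with COBYLA:
# minimize (x - 2)^2 + (y + 1)^2 subject to x + y >= 1.
objective = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
constraints = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 1.0}]

res = minimize(objective, x0=[0.0, 0.0], method="COBYLA",
               constraints=constraints)
print(res.x)  # approximately [2, -1], which lies on the constraint boundary
```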
- What can I contribute to SciPy (or other) with my pure math skill? I’m pen and paper mathematician
-
Emerging Technologies: Rust in HPC
if that makes your eyes bleed, what do you think about this? https://github.com/scipy/scipy/blob/main/scipy/special/specfun/specfun.f (heh)
NeuralPDE.jl
-
Automatically install huge number of dependency?
The documentation has a manifest associated with it: https://docs.sciml.ai/NeuralPDE/dev/#Reproducibility. Instantiating the manifest will give you all of the exact versions used for the documentation build (https://github.com/SciML/NeuralPDE.jl/blob/gh-pages/v5.7.0/assets/Manifest.toml). You just run `]instantiate folder_of_manifest` in the Julia Pkg REPL. Or you can use the Project.toml.
-
from Wolfram Mathematica to Julia
PDE solving libraries are MethodOfLines.jl and NeuralPDE.jl. NeuralPDE is very general but not very fast (it's a limitation of the method, PINNs are just slow). MethodOfLines is still somewhat under development but generates quite fast code.
-
AI and scientific computing in Kubernetes with the Julia language, K8sClusterManagers.jl
GitHub - SciML/NeuralPDE.jl: Physics-Informed Neural Networks (PINN) and Deep BSDE Solvers of Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
-
[D] ICLR 2022 RESULTS ARE OUT
That doesn't mean there's no use case for PINNs; we wrote a giant review-ish kind of thing on NeuralPDE.jl to describe where PINNs might be useful. It's just... not the best for publishing. It's things like: (a) where you have not already optimized a classical method, (b) where you need something that easily generates solvers for different cases without too much worry about stability, (c) high-dimensional PDEs, and (d) surrogates over parameters. (c) and (d) are the two "real" use cases you can actually publish about, but PINNs aren't quite good for (c) (see mesh-free methods from the old radial basis function literature in comparison) or (d) (there are much faster surrogate techniques). So we are continuing to work on them for (a) and (b) as an interesting option within a software suite, but that's not the kind of thing that's really publishable, so I don't think we plan to ever submit that article anywhere.
- [N] Open Colloquium by Prof. Max Welling: "Is the next deep learning disruption in the physical sciences?"
-
[D] What are some ideas that are hyped up in machine learning research but don't actually get used in industry (and vice versa)?
Did this change at all with the advent of Physics Informed Neural Networks? The Julia language has some really impressive tools for that use case. https://github.com/SciML/NeuralPDE.jl
-
[Research] Input Arbitrary PDE -> Output Approximate Solution
PDEs are difficult because you don't have a simple numerical definition over all PDEs because they can be defined by arbitrarily many functions. u' = Laplace u + f? Define f. u' = g(u) * Laplace u + f? Define f and g. Etc. To cover the space of PDEs you have to go symbolic at some point, and make the discretization methods dependent on the symbolic form. This is precisely what the ModelingToolkit.jl ecosystem is doing. One instantiation of a discretizer on this symbolic form is NeuralPDE.jl which takes a symbolic PDESystem and generates an OptimizationProblem for a neural network which represents the solution via a Physics-Informed Neural Network (PINN).
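The "go symbolic, then generate the discretization" idea can be sketched in Python with SymPy (NeuralPDE.jl compiles the symbolic PDE into a neural-network loss; the snippet below uses a plain finite-difference stepper purely to illustrate the symbolic-to-numeric pipeline, with a made-up forcing term):

```python
import numpy as np
import sympy as sp

# Symbolic part: the user supplies f in u_t = u_xx + f(x).
x = sp.symbols("x")
f_sym = sp.sin(sp.pi * x)               # hypothetical forcing term
f = sp.lambdify(x, f_sym, "numpy")      # compile symbolic -> numeric

# Generic part: an explicit finite-difference heat-equation stepper that
# only sees the compiled callable, not the symbolic expression.
n = 51
xs = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]
dt = 1e-4                               # satisfies dt <= dx^2 / 2 for stability
u = np.zeros(n)                         # u(x, 0) = 0, u(0, t) = u(1, t) = 0

for _ in range(10_000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u[1:-1] += dt * (lap[1:-1] + f(xs[1:-1]))

# The steady state of u_t = u_xx + sin(pi x) is sin(pi x) / pi^2.
print(np.abs(u - np.sin(np.pi * xs) / np.pi**2).max())
```

Swapping in a different symbolic f (or g) regenerates the numeric kernel without touching the stepper, which is the point being made above.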
-
[D] Has anyone worked with Physics Informed Neural Networks (PINNs)?
NeuralPDE.jl fully automates the approach (and extensions of it, which are required to make it solve practical problems) from symbolic descriptions of PDEs, so that might be a good starting point to both learn the practical applications and get something running in a few minutes. As part of MIT 18.337 Parallel Computing and Scientific Machine Learning I gave an early lecture on physics-informed neural networks (with a two part video) describing the approach, how it works and what its challenges are. You might find those resources enlightening.
-
Doing Symbolic Math with SymPy
What is great about ModelingToolkit.jl is how it's used in practical ways by other packages, e.g. NeuralPDE.jl.
Compared to SymPy, I feel that it is less of a "how do I integrate this function" package and more of a "how can I build this DSL" framework.
https://github.com/SciML/NeuralPDE.jl
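To make the contrast concrete, SymPy's bread and butter is exactly the "how do I integrate this function" question:

```python
import sympy as sp

x = sp.symbols("x")

# Symbolic integration: the Gaussian integral evaluates to sqrt(pi)
result = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))
print(result)
```

ModelingToolkit.jl-style usage, by contrast, is about declaring systems of symbolic equations that downstream packages like NeuralPDE.jl consume and compile into solvers.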
What are some alternatives?
SymPy - A computer algebra system written in pure Python
deepxde - A library for scientific machine learning and physics-informed learning
statsmodels - Statsmodels: statistical modeling and econometrics in Python
NumPy - The fundamental package for scientific computing with Python.
ModelingToolkit.jl - An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
ReservoirComputing.jl - Reservoir computing utilities for scientific machine learning (SciML)
astropy - Astronomy and astrophysics core library
AMDGPU.jl - AMD GPU (ROCm) programming in Julia
or-tools - Google's Operations Research tools:
18337 - 18.337 - Parallel Computing and Scientific Machine Learning