SciPy
Flux.jl
| | SciPy | Flux.jl |
|---|---|---|
| Mentions | 50 | 22 |
| Stars | 12,407 | 4,386 |
| Growth | 1.5% | 0.9% |
| Activity | 9.9 | 8.7 |
| Latest commit | 3 days ago | 1 day ago |
| Language | Python | Julia |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SciPy
- What Is a Schur Decomposition?
I guess it is a rite of passage to rewrite it. I'm doing it for SciPy too together with Propack in [1]. Somebody already mentioned your repo. Thank you for your efforts.
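Since the thread above is titled "What Is a Schur Decomposition?", a minimal sketch of computing one with scipy.linalg.schur may help; the matrix here is made up purely for illustration.

```python
# Minimal illustration of a Schur decomposition A = Z T Z^H with SciPy.
# The matrix A is arbitrary, chosen only for this example.
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 2.0, 2.0],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])

T, Z = schur(A)  # T is (quasi-)upper triangular, Z is orthogonal/unitary

print(np.allclose(A, Z @ T @ Z.conj().T))  # True up to floating-point error
```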
- Fortran codes are causing problems
Fortran codes have caused many problems for the Python package SciPy, and some of them are now being rewritten in C: e.g., https://github.com/scipy/scipy/pull/19121. Not only does R itself contain a lot of Fortran, there are also many R packages that use Fortran code: https://github.com/r-devel/r-svn, https://github.com/cran?q=&type=&language=fortran&sort=. Modern Fortran is a fine language, but most legacy Fortran codes are written in the F77 style. When I update the R package quantreg, which uses a lot of Fortran, I get many warning messages. I am not sure how the Fortran codes in the R ecosystem will be dealt with in the future, but they recently caused an issue in R because of missing compiler support for Fortran on 64-bit ARM Windows: https://blog.r-project.org/2023/08/23/will-r-work-on-64-bit-arm-windows/index.html. Some renowned packages like glmnet have already had their Fortran code rewritten in C/C++: https://cran.r-project.org/web/packages/glmnet/news/news.html
- [D] Which BLAS library to choose for apple silicon?
There are several lessons here: a) the vanilla conda-forge numpy and scipy builds come with OpenBLAS, and it works pretty well; b) do not use Netlib unless your matrices are small and you need to do a lot of SVDs (or, honestly, I don't know why you would); c) Apple's vecLib/Accelerate is super fast, but it is also numerically unstable, so much so that the SciPy devs dropped support for it back in 2018. That said, they are apparently bringing it back, since the 13.3 release of macOS Ventura saw some major improvements in Accelerate performance.
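If you want to see which BLAS/LAPACK your own NumPy/SciPy install is actually linked against (OpenBLAS, MKL, Accelerate, or reference Netlib), a quick sketch using the standard show_config() helpers:

```python
# Print build/linkage information; the BLAS and LAPACK entries reveal
# whether this install uses OpenBLAS, MKL, Accelerate, or reference Netlib.
import numpy as np
import scipy

np.show_config()
scipy.show_config()
```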
- SciPy: Interested in adopting PRIMA, but little appetite for more Fortran code
First, if you read through that scipy issue (https://github.com/scipy/scipy/issues/18118), the author was willing and able to relicense PRIMA under a 3-clause BSD license, which is perfectly acceptable for scipy.
As for the Numerical Recipes reference: there is a mention that scipy uses a slightly improved version of Powell's algorithm that is originally due to Forman Acton, presumably published in his popular book on numerical analysis, and that also happens to be described and included in Numerical Recipes. That is, unless the code scipy uses is copied from Numerical Recipes (which I presume it isn't), NR having the same algorithm doesn't mean that every other independent implementation of that algorithm falls under NR's copyright.
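For context, the Powell direction-set method discussed here is the one SciPy exposes as method="Powell" in scipy.optimize.minimize; a minimal sketch on a standard toy problem (the Rosenbrock function, chosen only for illustration):

```python
# Derivative-free minimization with SciPy's Powell direction-set method.
import numpy as np
from scipy.optimize import minimize

def rosen(p):
    # 2-D Rosenbrock function; minimum at (1, 1)
    return (1.0 - p[0]) ** 2 + 100.0 * (p[1] - p[0] ** 2) ** 2

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Powell")
print(res.x, res.fun)  # res.x should be close to [1, 1]
```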
- numerically evaluating wavelets?
- Fortran in SciPy: Get rid of linalg.interpolative Fortran code
- Optimization Without Using Derivatives
Reading the discussions under a previous thread titled "More Descent, Less Gradient" (https://news.ycombinator.com/item?id=23004026), I guess people might be interested in PRIMA (www.libprima.net), which provides the reference implementation of Powell's renowned gradient/derivative-free (zeroth-order) optimization methods, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA.
PRIMA solves general nonlinear optimization problems without using derivatives. It implements Powell's solvers in modern Fortran, complying with the Fortran 2008 standard. The implementation is faithful, in the sense of being mathematically equivalent to Powell's Fortran 77 implementation, but with better numerical performance. In contrast to the 7939 lines of Fortran 77 code with 244 GOTOs, the new implementation is structured and modularized.
There is a discussion about including the PRIMA solvers in SciPy (https://github.com/scipy/scipy/issues/18118), replacing the buggy and unmaintained Fortran 77 version of COBYLA and making the other four solvers available to all SciPy users.
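For reference, the existing COBYLA solver that the proposal would replace is already available through scipy.optimize.minimize; here is a small sketch on a toy inequality-constrained problem (the problem and the option values are made up for illustration):

```python
# Derivative-free constrained minimization with SciPy's current COBYLA backend.
# Toy problem: minimize (x - 1)^2 + (y - 2)^2 subject to x + y <= 2.
import numpy as np
from scipy.optimize import minimize

def objective(p):
    x, y = p
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

# COBYLA handles inequality constraints of the form fun(p) >= 0.
constraints = [{"type": "ineq", "fun": lambda p: 2.0 - (p[0] + p[1])}]

res = minimize(objective, x0=np.array([0.0, 0.0]), method="COBYLA",
               constraints=constraints, options={"rhobeg": 0.5, "maxiter": 200})

print(res.x, res.fun)  # res.x should be close to [0.5, 1.5]
```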
- What can I contribute to SciPy (or other) with my pure math skill? I’m a pen-and-paper mathematician
- Emerging Technologies: Rust in HPC
If that makes your eyes bleed, what do you think about this? https://github.com/scipy/scipy/blob/main/scipy/special/specfun/specfun.f (heh)
Flux.jl
- Julia 1.10 Released
- What Apple hardware do I need for CUDA-based deep learning tasks?
If you are really committed to running on Apple hardware, then take a look at TensorFlow for macOS. Another option is the Julia programming language, which has very basic Metal support at a CUDA-like level. FluxML would be the ML framework in Julia. I’m not sure either option will be painless or will let you do everything you could do with an Nvidia GPU.
- [D] ClosedAI license, open-source license which restricts only OpenAI, Microsoft, Google, and Meta from commercial use
Flux dominance!
- What would be your programming language of choice to implement a JIT compiler?
I’m no compiler expert, but check out Flux and Zygote: https://fluxml.ai/
- Any help or tips for Neural Networks on Computer Clusters
I would suggest you look into the Julia ecosystem instead of C++. Julia is almost identical to Python in terms of how you use it, but it's still very fast. You should look into the Flux.jl package for Julia.
- [D] Why are we stuck with Python for something that requires so much speed and parallelism (neural networks)?
Give Julia a try: https://fluxml.ai
- Deep Learning With Flux: Loss Doesn't Converge
2) Flux treats softmax a little differently than most other activation functions, such as relu and sigmoid (see here for more details). When you pass an activation function into a layer like Dense(3, 32, relu), Flux expects the function to be broadcast over the layer's output. However, softmax cannot be broadcast, as it operates over vectors rather than scalars. This means that if you want to use softmax as the final activation in your model, you need to pass it into Chain() as a separate final layer rather than as the activation of the last Dense layer.
- “Why I still recommend Julia”
Can you point to a concrete example of one that someone would run into when using the differential equation solvers with the default and recommended Enzyme AD for vector-Jacobian products? I'd be happy to look into it, but there do not seem to be any correctness issues in the Enzyme issue tracker that are current (3 issues are open, but they all seem to be fixed, other than https://github.com/EnzymeAD/Enzyme.jl/issues/278, which is actually an activity analysis bug in LLVM). So please be more specific. The issue with Enzyme right now seems to be more about finding functional forms that compile: it throws compile-time errors in the event that it cannot fully analyze the program and the program has too much dynamic behavior (example: https://github.com/EnzymeAD/Enzyme.jl/issues/368).
As an additional note, we recently did an overhaul of SciMLSensitivity (https://sensitivity.sciml.ai/dev/) and set up a system that amounts to 15 hours of direct unit tests doing a combinatoric check of arguments, plus 4 hours of downstream testing (https://github.com/SciML/SciMLSensitivity.jl/actions/runs/25...). What that identified is that any remaining issues that can arise are due to the implicit parameters mechanism in Zygote (Zygote.params). To counteract this upstream issue, we (a) default to never using Zygote VJPs whenever we can avoid it (hence defaulting to Enzyme and ReverseDiff first, as previously mentioned), and (b) put in a mechanism that throws an explicit error message early if Zygote hits any unimplemented derivative case (https://github.com/SciML/SciMLSensitivity.jl/blob/v7.0.1/src...). We have alerted the devs of the machine learning libraries, and from this there has been a lot of movement. In particular, a globals-free machine learning library, Lux.jl, was created with fully explicit parameters (https://lux.csail.mit.edu/dev/), and thus by design it cannot have this issue. In addition, the Flux.jl library itself is looking to do a redesign that eliminates implicit parameters (https://github.com/FluxML/Flux.jl/issues/1986). Which design will win out in the end is uncertain right now, but it's clear that, whatever happens, the future designs of the deep learning libraries will fully cut out that part of Zygote.jl. Additionally, the other AD libraries (Enzyme and Diffractor, for example) do not have this "feature", so it's an issue that can only arise from a specific (not recommended) way of using Zygote (which now throws explicit error messages early and often if used anywhere near SciML, because I don't tolerate it).
So from this, SciML should be rather safe and if not, please share some details and I'd be happy to dig in.
- Flux: The Elegant Machine Learning Stack
- Jax vs. Julia (Vs PyTorch)
> In his item #1, he links to https://discourse.julialang.org/t/loaderror-when-using-inter... The issue is actually a bug in Zygote, a Julia package for auto-differentiation, and is not directly related to the Julia codebase (or the Flux package) itself. Furthermore, the problematic code is working fine now, because DiffEqFlux has switched to Enzyme, which doesn't have that bug. He should first confirm whether the problem he is citing is actually a problem or not.
> Item #2, again another Zygote bug.
If Flux chose a buggy package as a dependency, that's on them, and users are well justified in steering clear of Flux if it has buggy dependencies. As of today, the Project.toml files for both Flux and DiffEqFlux still list Zygote as a dependency. Neither lists Enzyme.
What are some alternatives?
SymPy - A computer algebra system written in pure Python
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
statsmodels - Statsmodels: statistical modeling and econometrics in Python
Knet.jl - Koç University deep learning framework.
NumPy - The fundamental package for scientific computing with Python.
tensorflow - An Open Source Machine Learning Framework for Everyone
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
Transformers.jl - Julia Implementation of Transformer models
astropy - Astronomy and astrophysics core library
Lux.jl - Explicitly Parameterized Neural Networks in Julia
or-tools - Google's Operations Research tools:
Torch.jl - Sensible extensions for exposing torch in Julia.