Torch.jl vs SciPyDiffEq.jl

| | Torch.jl | SciPyDiffEq.jl |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 205 | 21 |
| Growth | 2.0% | - |
| Activity | 4.2 | 4.8 |
| Latest commit | 11 days ago | 8 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Torch.jl
- Julia 1.10 Released
- Julia 1.9: A New Era of Performance and Flexibility
- How usable is Julia for Natural Language Processing / Machine Learning?
- Does Julia Have a Chance to Overthrow Python in the Machine Learning Industry?
For frontends, Python has quite a head start. In principle it would be possible to write Julia front-ends to existing ML libraries (written e.g. in C), for example https://github.com/FluxML/Torch.jl, but the advantages over Python frontends would be very limited. Only a front-to-back Julia implementation leverages most of the language's advantages, like composability and flexibility.
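For a sense of what such a Julia frontend looks like in practice, here is a minimal sketch based on the usage pattern in the Torch.jl README; the `torch` and `tensor` helpers and the `dev` keyword are taken from that README as assumptions and may differ between versions:

```julia
using Flux, Metalhead, Torch
using Torch: torch, tensor

# Build an ordinary Flux model, then move its layers onto the libtorch backend
# (the `torch` helper is the README's assumed conversion function).
resnet  = ResNet()
tresnet = resnet.layers |> torch

# Wrap the input as a Torch tensor on a device (0 => first GPU in Torch).
ip  = rand(Float32, 224, 224, 3, 1)
tip = tensor(ip, dev = 0)

tresnet(tip)   # the forward pass now runs through libtorch
```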
- Julia: faster than Fortran, cleaner than Numpy
PyTorch for example is a C++ library with a Python user interface, see e.g. the language shares in GitHub (https://github.com/pytorch/pytorch ). There is also a Julia binding for Torch (https://github.com/FluxML/Torch.jl), but I do not know how up-to-date it is.
SciPyDiffEq.jl
- Good linear algebra libraries
Check out the SciML ecosystem. They are doing amazing work in that space. You might also want to integrate your methods with their libraries, as it will boost their potential audience massively. https://sciml.ai/
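For readers unfamiliar with the ecosystem, integrating a method with SciML's libraries essentially means implementing their common problem/solve interface. A minimal sketch of that interface for an ODE, using the standard DifferentialEquations.jl API (the toy logistic model is only for illustration):

```julia
using DifferentialEquations

# Toy logistic growth model du/dt = u*(1 - u), standing in for a real model.
f(u, p, t) = u * (1 - u)
u0    = 0.1
tspan = (0.0, 10.0)

prob = ODEProblem(f, u0, tspan)
sol  = solve(prob, Tsit5())   # any solver implementing the common interface plugs in here

sol(5.0)   # dense-output evaluation at t = 5
```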
- SciPy: Interested in adopting PRIMA, but little appetite for more Fortran code
Interesting response. I develop the Julia SciML organization https://sciml.ai/ and we'd be more than happy to work with you to get wrappers for PRIMA into Optimization.jl's general interface (https://docs.sciml.ai/Optimization/stable/). Please get in touch and we can figure out how to set this all up. I personally would be curious to try this out and do some benchmarks against nlopt methods.
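To make the "general interface" point concrete: Optimization.jl states the problem once and lets any backend solve it, so a PRIMA wrapper would slot into the same `OptimizationProblem`/`solve` call. Here is a minimal sketch using Nelder-Mead from OptimizationOptimJL as a stand-in derivative-free method; the PRIMA-specific solver types mentioned in the comment are not shown because they did not exist at the time:

```julia
using Optimization, OptimizationOptimJL

# Rosenbrock function in Optimization.jl's (u, p) convention.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

u0 = zeros(2)
p  = [1.0, 100.0]

prob = OptimizationProblem(rosenbrock, u0, p)
sol  = solve(prob, NelderMead())   # a PRIMA method (e.g. BOBYQA) would replace NelderMead()
```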
- Julia 1.9: A New Era of Performance and Flexibility
Overall, your analysis is very Python-centric. It's not very clear to me why Julia should focus on convincing Python users or developers. There are many areas of numerical and scientific computing that are not well served by Python, and it's exactly those areas that Julia is pushing into. The whole SciML https://sciml.ai/ ecosystem is a great toolbox for writing models and optimizations that would have otherwise required FORTRAN, C, and MATLAB. Staying within Julia provides access to a consistent set of autodiff technologies to further accelerate those efforts.
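As a small illustration of the "consistent set of autodiff technologies" point: plain Julia functions can be differentiated directly with packages such as ForwardDiff, one of the AD backends the SciML stack composes with (the toy loss below is only for illustration):

```julia
using ForwardDiff

# Any plain-Julia function can be differentiated without a separate framework.
loss(x) = sum(abs2, x .- 3.0)

x = [1.0, 2.0, 4.0]
ForwardDiff.gradient(loss, x)   # => [-4.0, -2.0, 2.0]
```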
- Can Fortran survive another 15 years?
What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.
> If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.
It tests with and without BLAS/LAPACK (which isn't always helpful, which of course you'd see from the benchmarks if you read them). One of the key differences, though, is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalents in many scenarios, and that's one noted factor in the performance boost (and is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing that the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
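For context on the overhead measurement being referenced: SciPyDiffEq.jl exposes SciPy's integrators through the same problem/solve interface as the native Julia solvers, which is what makes the comparison possible. A minimal sketch following the SciPyDiffEq.jl README; the `SciPyDiffEq.RK45()` wrapper name is assumed from that README:

```julia
using SciPyDiffEq, OrdinaryDiffEq

# Simple linear decay problem solved through both backends.
f(u, p, t) = -u
prob = ODEProblem(f, 0.5, (0.0, 1.0))

scipy_sol  = solve(prob, SciPyDiffEq.RK45())   # SciPy's RK45 via the wrapper
native_sol = solve(prob, Tsit5())              # native Julia solver for comparison
```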
> you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations
You do realize that a .so has lower overhead to call from a JIT-compiled language than from a statically compiled language like C, because you can optimize away some of the bindings at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising, because it's pretty clear how that works.
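For readers unfamiliar with the FFI point: calling into an already-loaded shared library from Julia compiles down to a direct native call, which is what the ffi-overhead benchmark measures. A minimal sketch using Julia's built-in `@ccall` with libc's `strlen`, which is already loaded in the process:

```julia
# The JIT lowers this to a bare call into libc; there is no interpreter-level
# binding layer, which is why per-call FFI overhead can be so low.
c_strlen(s::String) = @ccall strlen(s::Cstring)::Csize_t

c_strlen("hello")   # => 5
```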
I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging in size from 2 to the thousands, with as many methods wrapped in as many scenarios as we can manage. And we don't even "win" all of our benchmarks, because unlike for you, these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).
If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.
What are some alternatives?
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
PowerSimulationsDynamics.jl - Julia package to run Dynamic Power System simulations. Part of the Scalable Integrated Infrastructure Planning Initiative at the National Renewable Energy Lab.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
KiteSimulators.jl - Simulators for kite power systems
gluon-nlp - NLP made easy
Optimization.jl - Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
JuliaTorch - Using PyTorch in Julia Language
SciMLBenchmarks.jl - Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R
threads - Threads for Lua and LuaJIT. Transparent exchange of data between threads is allowed thanks to torch serialization.
fpm - Fortran Package Manager (fpm)
Lux.jl - Explicitly Parameterized Neural Networks in Julia
prima - PRIMA is a package for solving general nonlinear optimization problems without using derivatives. It provides the reference implementation for Powell's derivative-free optimization methods, i.e., COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. PRIMA means Reference Implementation for Powell's methods with Modernization and Amelioration, P for Powell.