RecursiveFactorization vs vectorflow
 | RecursiveFactorization | vectorflow
---|---|---
Mentions | 3 | 12
Stars | - | 1,291
Growth | - | 0.3%
Activity | - | 0.0
Last commit | - | 8 days ago
Language | - | D
License | - | Apache License 2.0
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RecursiveFactorization
- Can Fortran survive another 15 years?
What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biology models: it comes from 1122 ODEs with 24388 terms describing a stiff chemical reaction network that models the BCR signaling network from Barua et al. Or the discrete diffusion model benchmarks https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1)-scaling methods scale better than the O(log n)-scaling ones for large enough models.
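For a sense of what those stiff-ODE benchmarks exercise, here is a minimal sketch. It uses a small Robertson-style stand-in system rather than the full 1122-equation BCR model, and assumes the OrdinaryDiffEq.jl package; it is illustrative, not the benchmark code itself.

```julia
# Minimal sketch: solving a small stiff system (a Robertson-style stand-in,
# not the full BCR network) with a stiff solver from OrdinaryDiffEq.jl.
using OrdinaryDiffEq

function rober!(du, u, p, t)
    k1, k2, k3 = p
    du[1] = -k1 * u[1] + k3 * u[2] * u[3]
    du[2] =  k1 * u[1] - k2 * u[2]^2 - k3 * u[2] * u[3]
    du[3] =  k2 * u[2]^2
end

u0    = [1.0, 0.0, 0.0]
tspan = (0.0, 1e5)
p     = (0.04, 3e7, 1e4)

prob = ODEProblem(rober!, u0, tspan, p)
sol  = solve(prob, Rodas5(); abstol = 1e-8, reltol = 1e-8)  # stiff Rosenbrock method
```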
> If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.
It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd see from the benchmarks if you read them). One of the key differences is that there are some pure-Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor in the performance boost (and it's not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases; for example, https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead shows that calling SciPy through SciPyDiffEq, with the Julia JIT optimizations, gives lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
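Roughly, the kind of comparison involved looks like the sketch below; the lu! call signatures are written from memory and should be checked against the RecursiveFactorization.jl README before relying on them.

```julia
# Sketch of an LU micro-benchmark: the pure-Julia RecursiveFactorization.lu!
# versus the BLAS/LAPACK-backed LinearAlgebra.lu! on a smallish matrix.
using LinearAlgebra, RecursiveFactorization, BenchmarkTools

A = rand(100, 100)

# LAPACK-backed LU from whichever BLAS is loaded (OpenBLAS or MKL)
@btime LinearAlgebra.lu!(B) setup = (B = copy($A)) evals = 1

# Pure-Julia recursive LU
@btime RecursiveFactorization.lu!(B) setup = (B = copy($A)) evals = 1
```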
> you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations
You do realize that a .so can have lower call overhead from a JIT-compiled language than from a statically compiled language like C, because some of the binding work can be optimized away at runtime, right? https://github.com/dyu/ffi-overhead measures exactly that, and LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising once you see how it works.
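To make the FFI point concrete, here is a minimal Julia sketch of calling into a C shared library (libc's strlen, so no extra .so is needed); the dyu/ffi-overhead benchmark times this kind of call in a tight loop.

```julia
# Minimal FFI sketch: calling a C function from a shared library.
# ccall/@ccall lowers to a direct native call, which is what keeps
# the per-call overhead low in a JIT-compiled language.
len = @ccall strlen("hello, world"::Cstring)::Csize_t
println(Int(len))  # 12

# Older ccall form of the same call:
len2 = ccall(:strlen, Csize_t, (Cstring,), "hello, world")
```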
I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging in size from 2 to the thousands, with as many tools as we can wrap in as many scenarios as we can. And we don't even "win" all of our benchmarks, because unlike for you, these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).
If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.
- Yann LeCun: ML would have advanced if other lang had been adopted versus Python
- Small Neural networks in Julia 5x faster than PyTorch
Ask them to download Julia and try it, and file an issue if it is not fast enough. We try to have the latest available.
See for example: https://github.com/JuliaLinearAlgebra/RecursiveFactorization...
vectorflow
- Programming languages endorsed for server-side use at Meta
>> Mozilla (of course)
Mozilla is a C++ and JavaScript shop. What do they ship in Rust? How much of Firefox is written in Rust, for example?
>> Microsoft, Meta, Google/Alphabet, Amazon
Large firms have lots of devs and consequently lots of toy projects. Is their usage of Rust more significant than their use of D? I mean, Meta was churning out projects in D a while back (warp, flint, etc.) and looked like it might be going all in at one point (they even hired one of the leads on the D language).
>> That's practically all of FAANG
Who were we missing? Netflix; they've dabbled with D too: https://github.com/Netflix/vectorflow
Don't misunderstand my point: it's not that D is more popular than Rust, it's that Rust is not yet used for real work in any significant capacity.
Where's the big project written in Rust? Servo and the Rust compiler are the only two large Rust projects on GitHub.
- Cloud TPU VMs are generally available
Thanks Zak, already applied.
Just wondering, does the TPU VM support Vectorflow?
https://github.com/Netflix/vectorflow
- Vectorflow is a minimalist neural network library optimized for sparse data and single machine environments open sourced by Netflix (r/MachineLearning)
- [P] Vectorflow is a minimalist neural network library optimized for sparse data and single machine environments open sourced by Netflix
- Vectorflow is a minimalist neural network library optimized for sparse data and single machine environments open sourced by Netflix
- Vectorflow: Minimalist neural network library faster than TensorFlow in D
- Small Neural networks in Julia 5x faster than PyTorch
A library I designed a few years ago (https://github.com/Netflix/vectorflow) is also much faster than PyTorch/TensorFlow in these cases.
In "small" or "very sparse" setups, you're memory-bound, not compute-bound. TF and PyTorch are bad at that because they assume memory movements are worth the cost and do very few in-place operations.
Different tools for different jobs.
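A rough Julia sketch of that memory-bound point (illustrative only, not a vectorflow benchmark): when the work per element is tiny, allocating a fresh output on every call dominates the runtime, while writing into a preallocated buffer does not.

```julia
# Sketch: for small operands, allocation and memory traffic dominate.
# In-place mul! avoids the per-call allocation; the plain product does not.
using LinearAlgebra, BenchmarkTools

W = rand(Float32, 16, 16)
x = rand(Float32, 16)
y = similar(x)

@btime $W * $x           # allocates a new output vector on every call
@btime mul!($y, $W, $x)  # writes into the preallocated buffer, no allocation
```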
What are some alternatives?
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
dcompute - DCompute: Native execution of D on GPUs and other Accelerators
LeNetTorch - PyTorch implementation of LeNet for fitting MNIST for benchmarking.
KiteSimulators.jl - Simulators for kite power systems
RecursiveFactorization.jl - Pure-Julia recursive LU factorization routines that outperform the OpenBLAS/MKL equivalents in many scenarios
juliaup - Julia installer and version multiplexer
SciPyDiffEq.jl - Wrappers for the SciPy differential equation solvers for the SciML Scientific Machine Learning organization
blis - BLAS-like Library Instantiation Software Framework