StochasticAD.jl VS Octavian.jl

Compare StochasticAD.jl vs Octavian.jl and see what their differences are.

StochasticAD.jl

Research package for automatic differentiation of programs containing discrete randomness. (by gaurav-arya)

Octavian.jl

Multi-threaded BLAS-like library that provides pure Julia matrix multiplication (by JuliaLinearAlgebra)
                  StochasticAD.jl     Octavian.jl
Mentions          3                   17
Stars             181                 222
Growth            -                   0.0%
Activity          8.7                 3.9
Latest commit     19 days ago         26 days ago
Language          Julia               Julia
License           MIT License         GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

StochasticAD.jl

Posts with mentions or reviews of StochasticAD.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-22.
  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
    This is disregarding the development of said ecosystems, though. The point is that Python has been quite inhibitory to the development of this ecosystem. There are many corpses of automatic differentiation libraries (starting from autograd and tangent, then moving to things like theano, and finally tensorflow and pytorch) and many corpses of JIT compilers and accelerators (Cython, Numba, pypy, TensorFlow XLA, now PyTorch v2's JIT, etc.).

    What has been found over the last decade is that a large part of that is due to the design of the languages. Jan Vitek, for example, has a great talk describing how difficult it is to write a JIT compiler for R due to certain design choices in the language (https://www.youtube.com/watch?v=VdD0nHbcyk4, or the more detailed version https://www.youtube.com/watch?v=HStF1RJOyxI). There are certain language constructs that void lots of optimizations and then have to be worked around, which is why Python JITs choose subsets of the language, avoiding specific parts that are not easy or not possible to optimize. This is why each takes a domain-specific subset, a different subset of the language for numba vs jax vs etc., choosing something that is nice for ML versus for more generic code.

    With all of that, it's perfectly reasonable to point out that there are languages which have been designed not to have these compilation difficulties, which has resulted in having a single (JIT) compiler for the language. And by extension, it has made building machine learning and autodiff libraries not something that's a Google- or Meta-scale project (for example, PyTorch involves building GPU code bindings and a specialized JIT, not something very accessible). Julia is a language to point to here, but I think well-designed static languages like Rust also deserve a mention. How much further would we have gone if every new ML project didn't build a new compiler and a new automatic differentiation engine? What if the development was more modular and people could easily work on just the one thing they cared about?

    As a nice example, for last NeurIPS we put out a paper on automatic differentiation of discrete stochastic models, i.e. extending AD to automatically handle cases like agent-based models. The code is open source (https://github.com/gaurav-arya/StochasticAD.jl), and you can see it's almost all written by a (talented) undergraduate over a span of about 6 months. It requires JIT compilation because it works on a lot of things that are not solely big matrix-multiplication GPU kernels, but Julia provides that. And multiple dispatch gives GPU support. Done. The closest thing in PyTorch, storchastic, gets exponential scaling instead of StochasticAD's linear scaling, and isn't quite compatible with a lot of what's required for ML, so it benchmarks as thousands of times slower than the simple Julia code. Of course, when Meta needs it they can and will put the minds of 5-10 top PhDs on it to build it out into a feature of PyTorch over 2 years and have a nice release. But at the end of the day we really need to ask, is that how it should be?

  • [P] Stochastic Differentiable Programming: Unbiased Automatic Differentiation for Discrete Stochastic Programs (such as particle filters, agent-based models, and more!)
    3 projects | /r/MachineLearning | 18 Oct 2022
    Found relevant code at https://github.com/gaurav-arya/StochasticAD.jl + all code implementations here
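
The posts above describe StochasticAD.jl only in prose, so here is a minimal usage sketch modeled on the package README. It assumes the `derivative_estimate` and `stochastic_triple` entry points documented by StochasticAD; the toy program, parameter value, and sample count are illustrative, not taken from the posts.

```julia
# Hedged sketch: unbiased AD through a program with discrete randomness.
using StochasticAD, Distributions
using Statistics: mean

# Toy discrete stochastic program: E[X(p)] = 10p, so d/dp E[X(p)] = 10.
X(p) = rand(Binomial(10, p))

# Each call gives an unbiased single-sample estimate of d/dp E[X(p)];
# averaging many samples should converge to 10.
samples = [derivative_estimate(X, 0.5) for _ in 1:10_000]
println(mean(samples))          # ≈ 10.0

# A stochastic triple carries the primal value together with the
# infinitesimal and discrete-change information propagated through X.
println(stochastic_triple(X, 0.5))
```

Exact convergence behavior depends on the program; see the StochasticAD.jl documentation for the current API.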

Octavian.jl

Posts with mentions or reviews of Octavian.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-22.
  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
  • Julia 1.8 has been released
    8 projects | news.ycombinator.com | 18 Aug 2022
    For some examples of people porting existing C++/Fortran libraries to Julia, you should check out https://github.com/JuliaLinearAlgebra/Octavian.jl, https://github.com/dgleich/GenericArpack.jl, and https://github.com/apache/arrow-julia (just off the top of my head). These are all ports of C++ or Fortran libraries that match (or exceed) the performance of the originals, and in the case of Arrow.jl the port is faster, more general, and 10x less code.
  • Why Julia matrix multiplication so slow in this test?
    2 projects | /r/Julia | 31 May 2022
    Note that a performance-optimized Julia implementation is on par with, or can even outperform, the specialized high-performance BLAS libraries; see https://github.com/JuliaLinearAlgebra/Octavian.jl.
  • Multiple dispatch: Common Lisp vs Julia
    4 projects | /r/Julia | 5 Mar 2022
    If you look at the thread for your first reference, there were a large number of performance improvements suggested that resulted in a 30x speedup when combined. I'm not sure what you're looking at for your second link, but Julia is faster than Lisp in the n-body, spectral norm, mandelbrot, pidigits, regex, fasta, k-nucleotide, and reverse complement benchmarks (8 out of 10). For Julia going faster than C/Fortran, I would direct you to https://github.com/JuliaLinearAlgebra/Octavian.jl, a Julia program that beats MKL and OpenBLAS for matrix multiplication (one of the most heavily optimized algorithms in the world).
  • Why Fortran is easy to learn
    19 projects | news.ycombinator.com | 7 Jan 2022
    > But in the end, it's FORTRAN all the way down. Even in Julia.

    That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story. The performance of all of this is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl

  • Show HN: prometeo – a Python-to-C transpiler for high-performance computing
    19 projects | news.ycombinator.com | 17 Nov 2021
    Well, IMO it can definitely be rewritten in Julia, and to an easier degree than Python, since Julia allows hooking into the compiler pipeline at many areas of the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.

    You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe

    For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linear-algebra benchmarks.

  • Python behind the scenes #13: the GIL and its effects on Python multithreading
    2 projects | news.ycombinator.com | 29 Sep 2021
    The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl
  • From Julia to Rust
    14 projects | news.ycombinator.com | 5 Jun 2021
    > The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.

    The presence of garbage collection in julia is not a problem at all for hot, high performance code. There's nothing stopping you from manually managing your memory in julia.

    The easiest way would be to just preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices [1]. These are some of the hottest loops possible!

    For true hard real-time work, yes, Julia is not a good choice, but it's perfectly fine for soft real-time.

    [1] https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...

  • Julia 1.6 addresses latency issues
    5 projects | news.ycombinator.com | 25 May 2021
    If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia outperforming highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.

    Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.

  • Octavian.jl – BLAS-like Julia procedures for CPU
    1 project | news.ycombinator.com | 23 May 2021
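
Several of the posts above claim that Octavian's pure-Julia matrix multiply is competitive with OpenBLAS/MKL. Below is a hedged sketch of how one could check that locally using Octavian's exported `matmul` together with BenchmarkTools; the matrix size is arbitrary and any timings depend entirely on your hardware and thread count.

```julia
using Octavian            # pure-Julia, multi-threaded matmul
using BenchmarkTools

n = 512
A = rand(n, n)
B = rand(n, n)

# Correctness check: Octavian should agree with the default BLAS-backed `*`.
@assert Octavian.matmul(A, B) ≈ A * B

# Compare timings; which one wins (and by how much) is machine-dependent.
@btime $A * $B;                   # LinearAlgebra `*` (OpenBLAS/MKL)
@btime Octavian.matmul($A, $B);   # Octavian
```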
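
The "From Julia to Rust" reply above argues that preallocating buffers and reusing them keeps the garbage collector out of hot loops. A small sketch of that pattern with Octavian's in-place `matmul!` follows; the `hot_loop!` helper is made up here purely for illustration.

```julia
using Octavian

n = 256
A = rand(n, n)
B = rand(n, n)
C = similar(A)   # preallocate the output buffer once, outside the hot loop

# Because C is reused, the loop body performs no allocations, so the garbage
# collector has nothing to do while this hot section runs.
function hot_loop!(C, A, B, iters)
    for _ in 1:iters
        Octavian.matmul!(C, A, B)   # in-place multiply into the preallocated buffer
    end
    return C
end

hot_loop!(C, A, B, 100)
```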

What are some alternatives?

When comparing StochasticAD.jl and Octavian.jl you can also consider the following projects:

Agents.jl - Agent-based modeling framework in Julia

OpenBLAS - OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

julia - The Julia Programming Language

Symbolics.jl - Symbolic programming for the next generation of numerical software

RecursiveFactorization

owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz

Zygote.jl - 21st century AD

Verilog.jl - Verilog for Julia

Distributions.jl - A Julia package for probability distributions and associated functions.

Automa.jl - A julia code generator for regular expressions

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

prometeo - An experimental Python-to-C transpiler and domain specific language for embedded high-performance computing