Enzyme VS rust-ndarray

Compare Enzyme and rust-ndarray to see how they differ.

              Enzyme                                     rust-ndarray
Mentions      16                                         20
Stars         1,157                                      3,319
Growth        3.4%                                       3.3%
Activity      9.7                                        8.2
Last commit   2 days ago                                 14 days ago
Language      LLVM                                       Rust
License       GNU General Public License v3.0 or later   Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Enzyme

Posts with mentions or reviews of Enzyme. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-06.
  • Show HN: Curve Fitting Bezier Curves in WASM with Enzyme AD
    1 project | news.ycombinator.com | 13 Oct 2023
    Automatic differentiation is done using https://enzyme.mit.edu/
  • Ask HN: What Happened to TensorFlow Swift
    1 project | news.ycombinator.com | 27 May 2023
    Lattner left Google and was the primary reason they chose Swift, so they lost interest.

    If you're asking from an ML perspective, I believe the original motivation was to incorporate automatic differentiation in the Swift compiler. I believe Enzyme is the spiritual successor.

    https://github.com/EnzymeAD/Enzyme

  • Show HN: Port of OpenAI's Whisper model in C/C++
    9 projects | news.ycombinator.com | 6 Dec 2022
    https://ispc.github.io/ispc.html

    For auto-differentiation, when I need performance or memory I currently use Tapenade ( http://tapenade.inria.fr:8080/tapenade/index.jsp ) and/or manually written gradients when I need to fuse some kernels, but Enzyme ( https://enzyme.mit.edu/ ) is also very promising.

    MPI for parallelization across machines.
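
A minimal Rust sketch, not taken from the post above, of what a "manually written gradient" looks like for a toy squared residual; an AD tool such as Enzyme or Tapenade derives this kind of gradient code automatically from the primal function.

```rust
/// Primal function: a single squared residual, f(a, b) = (a*x + b - y)^2.
fn residual_sq(a: f64, b: f64, x: f64, y: f64) -> f64 {
    let r = a * x + b - y;
    r * r
}

/// Hand-written reverse-mode gradient of `residual_sq` with respect to (a, b),
/// i.e. the kind of code an AD tool would generate from the primal above.
fn residual_sq_grad(a: f64, b: f64, x: f64, y: f64) -> (f64, f64) {
    let r = a * x + b - y;
    let dr = 2.0 * r;  // d(r*r)/dr
    (dr * x, dr)       // chain rule through r = a*x + b - y
}

fn main() {
    let (da, db) = residual_sq_grad(2.0, 0.5, 3.0, 7.0);
    println!("f = {}, df/da = {da}, df/db = {db}", residual_sq(2.0, 0.5, 3.0, 7.0));
}
```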

  • Do you consider making a physics engine (for RL) worth it?
    3 projects | /r/rust | 8 Oct 2022
    For autodiff, we are currently working again on publishing a new Enzyme (https://enzyme.mit.edu) frontend for Rust which can also handle pure Rust types; the first version should be done in ~a week.
  • What is a really cool thing you would want to write in Rust but don't have enough time, energy or bravery for?
    21 projects | /r/rust | 8 Jun 2022
    Have you taken a look at EnzymeAD? There is a group porting it to Rust.
  • The Julia language has a number of correctness flaws
    19 projects | news.ycombinator.com | 16 May 2022
    Enzyme dev here, so take everything I say as being a bit biased:

    While, by design, Enzyme is able to run very fast by operating within the compiler (see https://proceedings.neurips.cc/paper/2020/file/9332c513ef44b... for details) -- it aggressively prioritizes correctness. Of course that doesn't mean that there aren't bugs (we're only human and it's a large codebase [https://github.com/EnzymeAD/Enzyme], especially if you're trying out newly-added features).

    Notably, this is where the current rough edges for Julia users are -- Enzyme will throw an error saying it couldn't prove correctness, rather than running (there is a flag for "making a best guess", but that's off by default). The exception to this is garbage collection, for which you can either run a static analysis, or stick to the "officially supported" subset of Julia that Enzyme specifies.

    Incidentally, this is also where being a cross-language tool is really nice -- namely, we can see edge cases/bug reports from any LLVM-based language (C/C++, Fortran, Swift, Rust, Python, Julia, etc.). So far the biggest code we've handled (and verified correctness for) was O(1 million) lines of LLVM from some C++ template hell.

    I will also add that while I absolutely love (and will do everything I can to support) Enzyme being used throughout arbitrary Julia code: in addition to exposing a nice user-facing interface for custom rules in the Enzyme Julia bindings like Chris mentioned, some Julia-specific features (such as full garbage collection support) also need handling in Enzyme.jl, before Enzyme can be considered an "all Julia AD" framework. We are of course working on all of these things (and the more the merrier), but there's only a finite amount of time in the day. [^]

    [^] Incidentally, this is in contrast to say C++/Fortran/Swift/etc, where Enzyme has much closer to whole-language coverage than Julia -- this isn't anything against GC/Julia/etc, but we just have things on our todo list.

  • Jax vs. Julia (Vs PyTorch)
    4 projects | news.ycombinator.com | 4 May 2022
    Idk, Enzyme is pretty next gen, all the way down to LLVM code.

    https://github.com/EnzymeAD/Enzyme

  • What's everyone working on this week (7/2022)?
    15 projects | /r/rust | 14 Feb 2022
    I'm working on merging my build-tool for (oxide)-enzyme into Enzyme itself. Also looking into improving the documentation.
  • Wsmoses/Enzyme: High-performance automatic differentiation of LLVM
    1 project | news.ycombinator.com | 22 Jan 2022
  • Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
    7 projects | news.ycombinator.com | 25 Dec 2021
    That seems to be one of the points of Enzyme [1], which was mentioned in the article.

    [1] - https://enzyme.mit.edu/

    Being able, in effect, to do interprocedural cross-language analysis seems awesome.

rust-ndarray

Posts with mentions or reviews of rust-ndarray. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-22.
  • Some Reasons to Avoid Cython
    5 projects | news.ycombinator.com | 22 Sep 2023
    I would love some examples of how to do non-trivial data interop between Rust and Python. My experience is that PyO3/Maturin is excellent when converting between simple datatypes but conversions get difficult when there are non-standard types, e.g. Python Numpy arrays or Rust ndarrays or whatever other custom thing.

    Polars seems to have a good model where it uses the Arrow in memory format, which has implementations in Python and Rust, and makes a lot of the ndarray stuff easier. However, if the Rust libraries are not written with Arrow first, they become quite hard to work with. For example, there are many libraries written with https://github.com/rust-ndarray/ndarray, which is challenging to interop with Numpy.

    (I am not an expert at all, please correct me if my characterizations are wrong!)
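
As a concrete illustration of the NumPy/ndarray interop the post above describes, here is a minimal sketch using the rust-numpy crate (crate name `numpy`), which exposes NumPy buffers as ndarray views; it assumes the pre-0.21 PyO3 gil-ref API, and the module name `example_ext` is made up for the example.

```rust
use numpy::{IntoPyArray, PyArray1, PyReadonlyArray1};
use pyo3::prelude::*;

/// Scale a 1-D NumPy array by `k`, going through an ndarray view.
#[pyfunction]
fn scale<'py>(py: Python<'py>, x: PyReadonlyArray1<'py, f64>, k: f64) -> &'py PyArray1<f64> {
    let x = x.as_array();  // zero-copy ndarray::ArrayView1 over the NumPy buffer
    let y = &x * k;        // ndarray arithmetic produces an owned Array1
    y.into_pyarray(py)     // copy the result back out as a NumPy array
}

/// Python module definition; usable as `import example_ext; example_ext.scale(arr, 2.0)`.
#[pymodule]
fn example_ext(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(scale, m)?)?;
    Ok(())
}
```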

  • Helper crate for working with image data of varying type?
    1 project | /r/rust | 29 May 2023
    Thanks for sharing. I read this issue on why ndarray does not have a dynamically typed array: https://github.com/rust-ndarray/ndarray/issues/651
  • What is the most efficient way to study Rust for scientific computing applications?
    1 project | /r/rust | 23 May 2023
    You can get involved with the ndarray project
  • faer 0.8.0 release
    6 projects | /r/rust | 21 Apr 2023
    Sadly Ndarray does look a little abandoned to me: https://github.com/rust-ndarray/ndarray
  • Status and Future of ndarray?
    2 projects | /r/rust | 3 Apr 2023
    The date of the last commit of [ndarray](https://github.com/rust-ndarray/ndarray) lies 6 months in the past, while many recent issues are open and untouched.
  • How does explicit unrolling differ from iterating through elements one-by-one? (ndarray example)
    1 project | /r/rust | 13 Jan 2023
    While looking through ndarray's src, I came across a set of functions that explicitly unroll 8 variables on each iteration of a loop, with the comment "eightfold unrolled so that floating point can be vectorized (even with strict floating point accuracy semantics)". I don't understand why floats would be affected by unrolling, and in general I'm confused as to how explicit unrolling differs from iterating through each element one by one. I assumed this would be a scenario where the compiler would optimize best anyway, which seems to be confirmed (at least in the context of using iter() rather than for) here. Could anyone give a little context into what this, or any explicit unrolling, achieves?
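
To make the unrolling question above concrete, here is a small standalone Rust sketch (not ndarray's actual code) of an eightfold-unrolled sum: with a single accumulator the compiler must preserve the strict left-to-right floating-point addition order and so cannot vectorize the loop, whereas eight independent partial sums make the reordering explicit in the source.

```rust
/// Sum with eight independent accumulators, in the spirit of the unrolled
/// loops in ndarray's source. Floating-point addition is not associative,
/// so under strict FP semantics the compiler will not turn one sequential
/// accumulator into SIMD lanes; writing the partial sums out by hand
/// changes the evaluation order in the source and lets them vectorize.
fn unrolled_sum(xs: &[f64]) -> f64 {
    let mut acc = [0.0f64; 8];
    let chunks = xs.chunks_exact(8);
    let tail: f64 = chunks.remainder().iter().sum();
    for c in chunks {
        for i in 0..8 {
            acc[i] += c[i];
        }
    }
    acc.iter().sum::<f64>() + tail
}

fn main() {
    let data: Vec<f64> = (1..=20).map(f64::from).collect();
    assert_eq!(unrolled_sum(&data), 210.0);
}
```
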
  • Announcing Burn: New Deep Learning framework with CPU & GPU support using the newly stabilized GAT feature
    7 projects | /r/rust | 6 Nov 2022
    Burn is different: it is built around the Backend trait, which encapsulates tensor primitives. Even the reverse-mode automatic differentiation is just a backend that wraps another one using the decorator pattern. The goal is to make it very easy to create optimized backends and support different devices and use cases. For now, there are only 3 backends: NdArray (https://github.com/rust-ndarray/ndarray) for a pure Rust solution, Tch (https://github.com/LaurentMazare/tch-rs) for easy access to CUDA and cuDNN optimized operations, and the ADBackendDecorator, which makes any backend differentiable. I am now refactoring the internal backend API to make it as easy as possible to plug in new ones.
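
A minimal sketch of the backend-plus-decorator design the post above describes; the trait and type names below are illustrative stand-ins, not Burn's actual API.

```rust
use std::marker::PhantomData;

// A minimal stand-in for a tensor backend trait.
trait Backend {
    type Tensor: Clone;
    fn add(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor;
}

// A plain CPU backend over Vec<f32>, standing in for an ndarray-based one.
struct CpuBackend;
impl Backend for CpuBackend {
    type Tensor = Vec<f32>;
    fn add(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor {
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }
}

// Decorator backend: wraps any other backend; a real implementation would
// record each operation on a tape here to enable reverse-mode autodiff.
struct AdDecorator<B: Backend>(PhantomData<B>);
impl<B: Backend> Backend for AdDecorator<B> {
    type Tensor = B::Tensor; // a real tape would pair this with a node id
    fn add(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor {
        // (record the op for the backward pass here) then delegate inward
        B::add(a, b)
    }
}

fn main() {
    let y = <AdDecorator<CpuBackend>>::add(&vec![1.0, 2.0], &vec![3.0, 4.0]);
    assert_eq!(y, vec![4.0, 6.0]);
}
```
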
  • Pure rust implementation for deep learning models
    3 projects | /r/rust | 9 Oct 2022
    Looks like it's an open request
  • The Illustrated Stable Diffusion
    3 projects | news.ycombinator.com | 4 Oct 2022
    https://github.com/rust-ndarray/ndarray/issues/281

    Answer: you can’t with this crate. I implemented a dynamic n-dim solution myself, but it uses views of integer indices that get copied to a new array; those views index into another flattened array in order to avoid duplicating possibly massive amounts of n-dimensional data. Using the crate alone, copying all the array data would be unavoidable.

    Ultimately I’ve had to make my own axis shifting and windowing mechanisms. But the crate is still a useful lib and a continuing effort.

    While I don’t mind getting into the weeds, these kinds of side efforts can really impact context focus so it’s just something to be aware of.

  • Any efficient way of splitting vector?
    2 projects | /r/rust | 12 Sep 2022
    In principle you're trying to convert between columnar and row-based data layouts, something that happens fairly often in data science. I bet there's some hyper-efficient SIMD magic that could be invoked for these slicing operations (and maybe the iterator solution does exactly that). Might be worth taking a look at how the relevant Rust libraries like ndarray do it.
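
As a small illustration of the row-to-column conversion discussed above, here is a sketch using ndarray: an owned row-major buffer is wrapped in an Array2, and column views are taken without copying. The data and shapes are made up for the example.

```rust
use ndarray::Array2;

fn main() {
    // Row-major "records": three rows of (x, y) pairs in one flat Vec.
    let flat = vec![1.0, 10.0, 2.0, 20.0, 3.0, 30.0];
    let table = Array2::from_shape_vec((3, 2), flat).expect("shape matches length");

    // Column views are zero-copy; only materialize with `to_vec` if an
    // owned columnar Vec is actually required downstream.
    let xs = table.column(0);
    let ys = table.column(1).to_vec();

    assert_eq!(xs.sum(), 6.0);
    assert_eq!(ys, vec![10.0, 20.0, 30.0]);
}
```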

What are some alternatives?

When comparing Enzyme and rust-ndarray you can also consider the following projects:

Zygote.jl - 21st century AD

nalgebra - Linear algebra library for Rust.

Flux.jl - Relax! Flux is the ML library that doesn't make you tensor

Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

image - Encoding and decoding images in Rust

Lux.jl - Explicitly Parameterized Neural Networks in Julia

neuronika - Tensors and dynamic neural networks in pure Rust.

linfa - A Rust machine learning framework.

utah - Dataframe structure and operations in Rust

faust - Functional programming language for signal processing and sound synthesis