Faster `matrixmultiply`?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/rust

  • matrixmultiply

    General matrix multiplication of f32 and f64 matrices in Rust. Supports matrices with general strides (a minimal usage sketch follows after this list).

    There's a famous crate [matrixmultiply](https://github.com/bluss/matrixmultiply) for matrix-matrix multiplication in Rust. But it's a bit slow for me.

  • matrixmultiply_mt

    A multithreaded, processor-specialized fork of the matrixmultiply crate.

    I forked it into matrixmultiply_mt to improve performance and add multithreading. However, I can't recommend it: the original library has since added a few extra kernels and layout optimizations, and I recently found that autovectorisation broke under newer rustc versions, so the fork is now slower than the original.

  • cblas-sys

    Bindings to CBLAS (C); a rough calling sketch follows after this list.

    I've switched to just using the AMD BLIS library and linking through cblas-sys. One day I'd like to rewrite a matmul and convolution library with packed-simd-2 or portable-simd once they and const generics are finished.
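For concreteness, here is a minimal usage sketch of matrixmultiply's `sgemm` entry point on small row-major buffers. The matrices and dimensions are made up for illustration; the call follows the crate's documented layout, with the row/column strides given in elements.

```rust
// Cargo.toml (assumed): matrixmultiply = "0.3"

fn main() {
    // Compute C = alpha * A * B + beta * C with
    // A: 2x3, B: 3x2, C: 2x2, all row-major f32 buffers.
    let a: [f32; 6] = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let b: [f32; 6] = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0];
    let mut c: [f32; 4] = [0.0; 4];

    let (m, k, n) = (2, 3, 2);
    unsafe {
        // Strides are in elements: a row-major M x K matrix has
        // row stride K and column stride 1.
        matrixmultiply::sgemm(
            m, k, n,
            1.0,                           // alpha
            a.as_ptr(), k as isize, 1,     // A and its (row, col) strides
            b.as_ptr(), n as isize, 1,     // B and its (row, col) strides
            0.0,                           // beta
            c.as_mut_ptr(), n as isize, 1, // C and its (row, col) strides
        );
    }
    println!("{:?}", c); // [4.0, 5.0, 10.0, 11.0]
}
```

`dgemm` is the f64 counterpart with the same argument layout.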
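And a rough sketch of the same product through cblas-sys. This assumes a CBLAS-compatible implementation such as AMD BLIS is actually linked into the build: cblas-sys only declares the symbols, so a `build.rs` or a `*-src` crate (e.g. blis-src) has to provide them. The enum and function names below follow the standard CBLAS interface as exposed by cblas-sys; treat the details as illustrative, not a drop-in recipe.

```rust
// Cargo.toml (assumed): cblas-sys = "0.1", plus whatever links a CBLAS
// provider (e.g. the blis-src crate or a hand-written build.rs).
use cblas_sys::{cblas_sgemm, CBLAS_LAYOUT, CBLAS_TRANSPOSE};

fn main() {
    // Same 2x3 * 3x2 product as above, row-major.
    let a: [f32; 6] = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let b: [f32; 6] = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0];
    let mut c: [f32; 4] = [0.0; 4];
    let (m, n, k) = (2, 2, 3);

    unsafe {
        cblas_sgemm(
            CBLAS_LAYOUT::CblasRowMajor,
            CBLAS_TRANSPOSE::CblasNoTrans, // A is not transposed
            CBLAS_TRANSPOSE::CblasNoTrans, // B is not transposed
            m, n, k,
            1.0,               // alpha
            a.as_ptr(), k,     // A and its leading dimension (row-major: its width)
            b.as_ptr(), n,     // B, ldb
            0.0,               // beta
            c.as_mut_ptr(), n, // C, ldc
        );
    }
    println!("{:?}", c); // expected [4.0, 5.0, 10.0, 11.0]
}
```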
