Faster `matrixmultiply`?

This page summarizes the projects mentioned and recommended in the original post on /r/rust.

  • matrixmultiply

    General matrix multiplication of f32 and f64 matrices in Rust. Supports matrices with general strides.

  • There's a well-known crate, [matrixmultiply](https://github.com/bluss/matrixmultiply), for matrix-matrix multiplication in Rust, but it's a bit slow for me. (A minimal usage sketch of its API appears after this list.)

  • matrixmultiply_mt

    A multithreaded, processor-specialized fork of the matrixmultiply crate.

  • I forked it into matrixmultiply_mt to improve performance and add multithreading. However, I can't recommend it anymore: the original library has since added a few extra kernels and layout optimizations, and I recently found that autovectorisation had broken under newer rustc versions, so the fork is now slower than the original.

  • cblas-sys

    Bindings to CBLAS (C)

  • I've switched to just using the AMD BLIS library and linking to it through cblas-sys (see the sketch after this list). One day I'd like to rewrite a matmul and convolution library with packed-simd-2 or portable-simd, once they and const generics are finished.
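
Below is a minimal sketch of the matrixmultiply API mentioned above, assuming the crate's raw `sgemm(m, k, n, alpha, a, rsa, csa, b, rsb, csb, beta, c, rsc, csc)` entry point; the dependency version and the row-major strides are only illustrative.

```rust
// Cargo.toml (version is an assumption): matrixmultiply = "0.3"
use matrixmultiply::sgemm;

fn main() {
    // Compute C = 1.0 * A * B + 0.0 * C with row-major buffers:
    // A is m x k, B is k x n, C is m x n.
    let (m, k, n) = (2usize, 3usize, 2usize);
    let a: Vec<f32> = vec![1., 2., 3.,
                           4., 5., 6.];
    let b: Vec<f32> = vec![7., 8.,
                           9., 10.,
                           11., 12.];
    let mut c = vec![0.0f32; m * n];

    // Safety: the pointers and strides below describe exactly the buffers above.
    unsafe {
        // For a row-major matrix with `cols` columns, the row stride is
        // `cols` elements and the column stride is 1.
        sgemm(
            m, k, n,
            1.0,
            a.as_ptr(), k as isize, 1,      // A and its (row, column) strides
            b.as_ptr(), n as isize, 1,      // B and its strides
            0.0,
            c.as_mut_ptr(), n as isize, 1,  // C and its strides
        );
    }

    assert_eq!(c, vec![58., 64., 139., 154.]);
}
```

Passing different strides (for example, a column stride equal to the number of rows) treats the same buffer as column-major, which is how the "general strides" support in the crate description is used in practice for transposed or sliced matrices without copying.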
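
And here is a sketch of the cblas-sys route from the last comment. cblas-sys only provides the C declarations, so the program still has to be linked against a CBLAS implementation (AMD BLIS in the comment's case), for example through a build script that emits `cargo:rustc-link-lib=blis`; that linking setup and the exact Rust-side names (`cblas_sgemm`, `CBLAS_LAYOUT::CblasRowMajor`, `CBLAS_TRANSPOSE::CblasNoTrans`) are assumptions based on the standard CBLAS interface, not something verified against a particular cblas-sys release.

```rust
// Assumed build setup: cblas-sys for the declarations, plus a link against a
// CBLAS-providing BLIS, e.g. a build.rs printing `cargo:rustc-link-lib=blis`.
use cblas_sys::{cblas_sgemm, CBLAS_LAYOUT, CBLAS_TRANSPOSE};

fn main() {
    // Row-major C = 1.0 * A * B + 0.0 * C, with A 2x3, B 3x2, C 2x2.
    let (m, n, k) = (2_i32, 2_i32, 3_i32);
    let a: [f32; 6] = [1., 2., 3., 4., 5., 6.];
    let b: [f32; 6] = [7., 8., 9., 10., 11., 12.];
    let mut c = [0.0_f32; 4];

    // Safety: dimensions and leading dimensions match the buffers above.
    unsafe {
        cblas_sgemm(
            CBLAS_LAYOUT::CblasRowMajor,
            CBLAS_TRANSPOSE::CblasNoTrans,  // A is not transposed
            CBLAS_TRANSPOSE::CblasNoTrans,  // B is not transposed
            m, n, k,
            1.0,
            a.as_ptr(), k,      // lda: leading dimension of row-major A
            b.as_ptr(), n,      // ldb
            0.0,
            c.as_mut_ptr(), n,  // ldc
        );
    }

    assert_eq!(c, [58., 64., 139., 154.]);
}
```

The call shape mirrors the matrixmultiply sketch above, which makes it straightforward to swap a BLAS backend in behind the same code path; the main practical differences are the i32 dimensions and leading-dimension arguments in place of per-axis element strides.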
