Top 15 matrix-multiplication Open-Source Projects
-
XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
-
laser
The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers (by mratsim)
-
sparse
Sparse matrix formats for linear algebra supporting scientific and machine learning applications
-
VBA-Expressions
A powerful string expression evaluator for VBA and LO Basic, which puts more than 100 mathematical, statistical, financial, date-time, logic and text manipulation functions at the user's fingertips.
Project mention: Algorithmic Alchemy: Exploiting Graph Theory in the Foreign Exchange | dev.to | 2023-10-05

William Fiset's GitHub examples - Bellman Ford On Adjacency Matrix
Project mention: Faer-rs: Linear algebra foundation for the Rust programming language | news.ycombinator.com | 2024-04-24

BLIS is an interesting new direction in that regard: https://github.com/flame/blis
> The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels.
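To make the "very simple kernels" idea concrete, here is a minimal sketch (not BLIS's actual code; the `MR`/`NR` tile sizes and packed layouts are illustrative assumptions) of the kind of micro-kernel such frameworks instantiate: a small fixed-size block `C += A * B`, which the outer loops then tile and pack around.

```c
#include <assert.h>

/* Illustrative micro-kernel: accumulate C[MR x NR] += A[MR x K] * B[K x NR].
 * A is assumed packed so that A[k*MR + i] is element (i, k);
 * B is assumed packed so that B[k*NR + j] is element (k, j).
 * Real implementations replace the inner loops with SIMD registers. */
enum { MR = 4, NR = 4 };

void microkernel(int K, const float *A, const float *B, float *C, int ldc) {
    float acc[MR][NR] = {{0}};           /* accumulators stay in registers */
    for (int k = 0; k < K; k++)
        for (int i = 0; i < MR; i++)
            for (int j = 0; j < NR; j++)
                acc[i][j] += A[k * MR + i] * B[k * NR + j];
    for (int i = 0; i < MR; i++)         /* write back to C with row stride ldc */
        for (int j = 0; j < NR; j++)
            C[i * ldc + j] += acc[i][j];
}
```

The point of the design is that only this tiny block needs hand-optimization per architecture; the cache-blocking loops above it are portable.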
Project mention: Xnnpack: High-efficiency floating-point neural network inference operators | news.ycombinator.com | 2023-12-25
```shell
git clone https://github.com/CNugteren/CLBlast.git
cd CLBlast
cmake .
cmake --build . --config Release
mkdir install
cmake --install . --prefix ~/CLBlast/install
cp libclblast.so* $PREFIX/lib
cp ./include/clblast.h ../llama.cpp
```
It depends.
You need 2–3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't generate them on its own: it only does that when the operation is associative, i.e. (a+b)+c = a+(b+c), which holds for integers but not for floats.

There is an escape hatch in `-ffast-math`.
I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks%2Ffp...
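The multiple-accumulator trick described above can be sketched as follows (a minimal illustration, not taken from the linked benchmarks; function names are mine):

```c
#include <assert.h>
#include <stddef.h>

/* One accumulator: every addition depends on the previous one,
 * so the loop is limited by the latency of a single FP add. */
float sum_naive(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += x[i];
    return s;
}

/* Four independent accumulators break the dependency chain so the CPU
 * can overlap additions. The compiler won't do this transformation
 * itself (without -ffast-math) because it changes the order of float
 * additions, and float addition is not associative. */
float sum_unrolled(const float *x, size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++) s0 += x[i];   /* remainder elements */
    return (s0 + s1) + (s2 + s3);
}
```

Both functions return the same value for exactly representable inputs; they can differ in the last bits for general float data, which is exactly why the compiler refuses to reorder without `-ffast-math`.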
Project mention: Show HN: Stella Nera – Maddness Hardware Accelerator | news.ycombinator.com | 2023-11-21
matrix-multiplication related posts
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
-
Show HN: Stella Nera – Maddness Hardware Accelerator
-
Can't compile llama-cpp-python with CLBLAST
-
Got bored and implemented the AlphaTensor matrix multiplication algorithms in Rust with SIMD https://github.com/drbh/simd-alphatensor-rs
-
10x faster matrix and vector operations
-
BLAS-level CPU Performance in 100 Lines of C
-
I created a linear algebra library to work with matrixes and lists
-
Index
What are some of the best open-source matrix-multiplication projects? This list ranks them by GitHub stars:
# | Project | Stars |
---|---|---|
1 | Algorithms | 16,542 |
2 | blis | 2,107 |
3 | XNNPACK | 1,700 |
4 | how-to-optimize-gemm | 1,618 |
5 | neanderthal | 1,042 |
6 | CLBlast | 997 |
7 | blislab | 416 |
8 | laser | 261 |
9 | halutmatmul | 202 |
10 | sparse | 153 |
11 | sparse_dot | 68 |
12 | simd-alphatensor-rs | 58 |
13 | VBA-Expressions | 19 |
14 | vector | 13 |
15 | Comparison-multiplying-matrices | 1 |