array VS laser

Compare array vs laser and see what their differences are.

laser

The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers (by mratsim)
                array                 laser
Mentions        5                     6
Stars           188                   261
Growth          -                     2.3%
Activity        6.9                   3.6
Last commit     4 months ago          4 months ago
Language        C++                   Nim
License         Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

array

Posts with mentions or reviews of array. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
  • Einsum in 40 Lines of Python
    6 projects | news.ycombinator.com | 27 Apr 2024
    I wrote a library in C++ (I know, probably a non-starter for most reading this) that I think does most of what you want, as well as some other requests in this thread (generalized to more than just multiply-add): https://github.com/dsharlet/array?tab=readme-ov-file#einstei....

    A matrix multiply written with it looks like this:

        enum { i = 2, j = 0, k = 1 };
  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    I should have mentioned somewhere that I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want the thread parallelism outside code like this anyway.

    As for the inner loop not being well optimized... the disassembly looks like the same basic thing as OpenBLAS. There's disassembly in the comments of that file to show what code it generates, I'd love to know what you think is lacking! The only difference between the one I linked and this is prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...

  • A basic introduction to NumPy's einsum
    13 projects | news.ycombinator.com | 9 Apr 2022
    If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions

    It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools needed for humans to express the code in a way that a good compiler can turn into really good code. (A loop-level sketch of the idea follows below.)
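
For readers who have not looked at the repository, here is a minimal, self-contained sketch of the plain loop nest that an Einstein-summation style matrix multiply such as C(i, j) += A(i, k) * B(k, j) expands to. This is a generic illustration written for this comparison, not dsharlet/array's actual API; see the README links in the posts above for the real interface.

    // Generic sketch (not dsharlet/array's API): the loop nest that an
    // einsum-style matmul C(i, j) += A(i, k) * B(k, j) expands to.
    #include <cstddef>
    #include <vector>

    // Multiply an M x K matrix A by a K x N matrix B into an M x N matrix C.
    // All matrices are stored row-major in flat vectors; C must be sized M * N
    // and zero-initialized by the caller.
    void matmul(const std::vector<float>& A, const std::vector<float>& B,
                std::vector<float>& C, std::size_t M, std::size_t N, std::size_t K) {
      for (std::size_t i = 0; i < M; ++i) {      // rows of C
        for (std::size_t k = 0; k < K; ++k) {    // reduction index
          const float a = A[i * K + k];
          for (std::size_t j = 0; j < N; ++j) {  // columns of C, contiguous innermost
            C[i * N + j] += a * B[k * N + j];    // C(i, j) += A(i, k) * B(k, j)
          }
        }
      }
    }

The main degree of freedom here is the loop order: keeping j innermost makes the accesses to B and C contiguous, and, as the second post above notes, the remaining gap to an OpenBLAS-style kernel is mostly prefetching and outer-loop ordering.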

laser

Posts with mentions or reviews of laser. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-23.
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    It depends.

    You need 2-3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't do it on its own, because it only creates them when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats.

    There is an escape hatch in -ffast-math. (A small sketch of the multiple-accumulator idea appears after this list.)

    I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks%2Ffp...

  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    Ah,

    It was from an older implementation that wasn't compatible with Nim v2. I've commented it out.

    If you pull again it should work.

    > Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.

    I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...

    It defaults to 1920x1920 * 1920x1920. Note that if you activate the benchmarks against PyTorch Glow, in the past it didn't support sizes that weren't a multiple of 16 or something like that; I'm not sure about today.

    Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...

    It also supports pre-packing, which is useful to reimplement a batch_matmul like the one CuBLAS provides, and is quite useful for convolution via matmul. (A rough sketch of what packing means appears after this list.)

  • Why does working with a transposed tensor not make the following operations less performant?
    2 projects | /r/MLQuestions | 19 Jun 2021
    For convolutions: - https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
  • Improve performance with SIMD intrinsics
    1 project | /r/C_Programming | 25 Feb 2021
    You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup going from naive transposition to double loop tiling, see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238 (a generic sketch of double loop tiling follows below).
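
To illustrate the multiple-accumulator point from the first laser post above, here is a minimal, self-contained sketch (written for this comparison, not taken from the laser benchmarks) of a float sum reduction with four independent accumulators. Because the split changes the order in which the additions happen, a compiler will only generate this form on its own under -ffast-math or a similar flag.

    #include <cstddef>
    #include <vector>

    // Naive reduction: every addition depends on the previous one, so the
    // loop is limited by floating-point add latency.
    float sum_naive(const std::vector<float>& x) {
      float s = 0.0f;
      for (float v : x) s += v;
      return s;
    }

    // Four independent accumulators: the four additions per iteration do not
    // depend on each other, so they can overlap in the pipeline
    // (instruction-level parallelism). The summation order differs from
    // sum_naive, which is why the compiler won't do this for floats by itself.
    float sum_unrolled(const std::vector<float>& x) {
      float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
      std::size_t i = 0;
      for (; i + 4 <= x.size(); i += 4) {
        s0 += x[i + 0];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
      }
      float s = (s0 + s1) + (s2 + s3);
      for (; i < x.size(); ++i) s += x[i];  // remaining elements
      return s;
    }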
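
The second laser post above mentions packing. As a rough sketch of the idea (not laser's implementation, whose packing lives in the file linked in that post), packing copies the tile of the input matrix that the inner GEMM kernel is about to use into a small contiguous buffer, so the hot loop reads sequential, cache- and TLB-friendly memory instead of striding across the full matrix.

    #include <cstddef>
    #include <vector>

    // Copy a kc x nc panel of the row-major matrix B (full width N) into a
    // contiguous buffer. The GEMM inner kernel then reads `packed`
    // sequentially instead of jumping N floats between consecutive rows.
    void pack_panel_b(const std::vector<float>& B, std::vector<float>& packed,
                      std::size_t N,                     // row stride of B
                      std::size_t k0, std::size_t kc,    // panel rows [k0, k0 + kc)
                      std::size_t j0, std::size_t nc) {  // panel cols [j0, j0 + nc)
      packed.resize(kc * nc);
      for (std::size_t k = 0; k < kc; ++k) {
        for (std::size_t j = 0; j < nc; ++j) {
          packed[k * nc + j] = B[(k0 + k) * N + (j0 + j)];
        }
      }
    }

Pre-packing, as the post describes, means doing this copy once up front and reusing the packed buffer across several multiplications, for example across a batch or across a convolution lowered to matmul.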
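
Finally, the last post points at loop tiling for matrix transposition. Here is a generic sketch of double loop tiling (again written for this comparison, not the laser benchmark itself): the matrix is transposed one small block at a time, so both the reads from the source and the writes to the destination stay within a cache-sized footprint.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Transpose an M x N row-major matrix `src` into an N x M row-major
    // matrix `dst`, working in B x B blocks (double loop tiling). A naive
    // transpose strides through one of the two matrices by a full row on
    // every element, which thrashes the cache for large matrices.
    void transpose_tiled(const std::vector<float>& src, std::vector<float>& dst,
                         std::size_t M, std::size_t N, std::size_t B = 64) {
      for (std::size_t ii = 0; ii < M; ii += B) {
        for (std::size_t jj = 0; jj < N; jj += B) {
          const std::size_t i_end = std::min(ii + B, M);
          const std::size_t j_end = std::min(jj + B, N);
          for (std::size_t i = ii; i < i_end; ++i) {
            for (std::size_t j = jj; j < j_end; ++j) {
              dst[j * M + i] = src[i * N + j];
            }
          }
        }
      }
    }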

What are some alternatives?

When comparing array and laser you can also consider the following projects:

optimizing-the-memory-layout-of-std-tuple - Optimizing the memory layout of std::tuple

Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends

NumPy - The fundamental package for scientific computing with Python.

nim-sos - Nim wrapper for Sandia-OpenSHMEM

cadabra2 - A field-theory motivated approach to computer algebra.

ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!

alphafold2 - To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released

analisis-numerico-computo-cientifico - Numerical analysis and scientific computing

Einsum.jl - Einstein summation notation in Julia

blis - BLAS-like Library Instantiation Software Framework

c-examples - Example C code

JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]