laser

The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers (by mratsim)

laser reviews and mentions

Posts with mentions or reviews of laser. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-23.
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    It depends.

    You need 2–3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't create them on its own: it only does so when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats.

    The escape hatch is -ffast-math, which lets the compiler reassociate floating-point operations.

    I have extensive benchmarks on this here: https://github.com/mratsim/laser/blob/master/benchmarks/fp...
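
    A minimal C sketch of the idea (illustrative, not taken from the linked laser benchmarks): the single-accumulator loop is serialized on floating-point add latency, while splitting the sum over independent accumulators lets the additions overlap; the compiler will only perform this rewrite itself under -ffast-math, because it changes the order of the floating-point additions.

        #include <stddef.h>

        /* Naive reduction: one accumulator, each add depends on the previous one,
         * so the loop is serialized on the FP add latency. */
        float sum_naive(const float *x, size_t n) {
            float acc = 0.0f;
            for (size_t i = 0; i < n; ++i)
                acc += x[i];
            return acc;
        }

        /* Same reduction with 4 independent accumulators: the adds can overlap in
         * the FP pipeline. The compiler won't rewrite sum_naive into this form by
         * itself because it reorders FP additions (and may change the rounded
         * result) unless -ffast-math permits it. */
        float sum_unrolled(const float *x, size_t n) {
            float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
            size_t i = 0;
            for (; i + 4 <= n; i += 4) {
                a0 += x[i + 0];
                a1 += x[i + 1];
                a2 += x[i + 2];
                a3 += x[i + 3];
            }
            for (; i < n; ++i)   /* remainder */
                a0 += x[i];
            return (a0 + a1) + (a2 + a3);
        }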

  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    Ah, it was from an older implementation that wasn't compatible with Nim v2. I've commented it out.

    If you pull again it should work.

    > Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.

    I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...

    It defaults to 1920x1920 * 1920x1920. Note that if you enable the benchmarks against PyTorch Glow, it didn't support matrix dimensions that aren't a multiple of 16 (or something similar) in the past; I'm not sure whether that's still the case.

    Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...

    It also supports pre-packing, which is useful for reimplementing batch_matmul like what cuBLAS provides, and is quite useful for convolution via matmul.
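
    As a rough sketch of what packing means here (illustrative C, not laser's actual Nim kernels; the tile name and size are placeholders): a block of B is copied into a contiguous buffer laid out in narrow column panels so the GEMM micro-kernel streams through memory linearly. Pre-packing just means doing this copy once and reusing the buffer across several matmuls, e.g. for batched matmul or im2col-based convolution.

        #include <stddef.h>

        enum { NR = 8 };  /* micro-kernel panel width; a placeholder value */

        /* Pack a kc x nc block of row-major B (leading dimension ldb) into a
         * contiguous buffer arranged as column panels of width NR, k-major
         * inside each panel. `packed` must hold kc * ((nc + NR - 1) / NR) * NR
         * floats; the last panel is zero-padded. */
        void pack_B(const float *B, size_t ldb, size_t kc, size_t nc, float *packed) {
            for (size_t j = 0; j < nc; j += NR) {          /* one panel at a time */
                size_t jb = (nc - j < NR) ? (nc - j) : NR;
                for (size_t k = 0; k < kc; ++k) {
                    for (size_t jj = 0; jj < jb; ++jj)
                        *packed++ = B[k * ldb + j + jj];
                    for (size_t jj = jb; jj < NR; ++jj)    /* zero-pad last panel */
                        *packed++ = 0.0f;
                }
            }
        }

    Pre-packing then amounts to calling a routine like this once, keeping the buffer around, and handing it to every subsequent matmul that reuses the same B.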

  • Why does working with a transposed tensor not make the following operations less performant?
    2 projects | /r/MLQuestions | 19 Jun 2021
    For convolutions: https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
  • Improve performance with SIMD intrinsics
    1 project | /r/C_Programming | 25 Feb 2021
    You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup by going from naive transposition to double loop tiling; see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238
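
    A condensed C version of that exercise (the linked benchmark itself is Nim): the naive transpose strides through memory on either the reads or the writes, while the tiled variant processes small square blocks whose working set fits in cache, which is where the roughly 3x speedup on large matrices comes from.

        #include <stddef.h>

        /* Naive transpose: either the reads or the writes stride through memory,
         * missing cache on every element once the matrix exceeds cache size. */
        void transpose_naive(const float *src, float *dst, size_t rows, size_t cols) {
            for (size_t i = 0; i < rows; ++i)
                for (size_t j = 0; j < cols; ++j)
                    dst[j * rows + i] = src[i * cols + j];
        }

        enum { TILE = 32 };  /* block size; tune for the target cache */

        /* Double loop tiling: process TILE x TILE blocks so each block's working
         * set stays in cache, turning strided accesses into cache-friendly ones. */
        void transpose_tiled(const float *src, float *dst, size_t rows, size_t cols) {
            for (size_t ii = 0; ii < rows; ii += TILE)
                for (size_t jj = 0; jj < cols; jj += TILE) {
                    size_t imax = (ii + TILE < rows) ? ii + TILE : rows;
                    size_t jmax = (jj + TILE < cols) ? jj + TILE : cols;
                    for (size_t i = ii; i < imax; ++i)
                        for (size_t j = jj; j < jmax; ++j)
                            dst[j * rows + i] = src[i * cols + j];
                }
        }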

Stats

Basic laser repo stats
Mentions: 6
Stars: 261
Activity: 3.6
Last commit: 4 months ago
