plb2 VS laser

Compare plb2 vs laser and see how they differ.

plb2

A programming language benchmark (by attractivechaos)

laser

The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers (by mratsim)
                 plb2                          laser
Mentions         7                             6
Stars            238                           261
Growth           -                             1.5%
Activity         9.4                           3.6
Last commit      20 days ago                   4 months ago
Language         C                             Nim
License          Creative Commons Zero v1.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

plb2

Posts with mentions or reviews of plb2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-12.
  • Byte-Sized Swift: Building Tiny Games for the Playdate
    3 projects | news.ycombinator.com | 12 Mar 2024
    https://github.com/attractivechaos/plb2 - limited but broad comparison across a large number of languages. Swift and Nim both compare favourably to C.
  • The One Billion Row Challenge in Go: from 1m45s to 4s in nine solutions
    15 projects | news.ycombinator.com | 2 Mar 2024
    https://github.com/attractivechaos/plb2/blob/master/README.m...

    Synthetic benchmarks aside, I think as far as average (Spring Boots of the world) code goes, Go beats Java almost every time, often in fewer lines than the usual pom.xml

  • Python 3.13 Gets a JIT
    11 projects | news.ycombinator.com | 9 Jan 2024
    I wouldn't be so enthusiastic. Look at other languages that have a JIT now: Ruby and PHP. After years of effort, they are still an order of magnitude slower than V8 and even PyPy [1]. It seems to me that you need to design a JIT implementation from the ground up to get good performance – V8, Dart, LuaJIT and PyPy are like this; if you start with a pure interpreter, it may be difficult to speed it up later.

    [1] https://github.com/attractivechaos/plb2

  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    A curious thing about Swift: after https://github.com/attractivechaos/plb2/pull/23, the matrix multiplication example is comparable to C and Rust. However, I don’t see a way to idiomatically optimise the sudoku example, whose main overhead is allocating several arrays each time solve() is called. Apparently, in Swift there is no such thing as static array allocation. That’s very unfortunate. (See the sketch below for what fixed-size, allocation-free arrays look like in C.)
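
As a concrete illustration of the remark above, here is a minimal C sketch. It is not taken from plb2; the names solve(), board and candidates are made up. In C, scratch buffers with a compile-time size can be plain automatic (stack) arrays, so calling solve() in a loop performs no heap allocation at all; the comment's point is that Swift's Array offers no equivalent fixed-size, allocation-free storage.

    #include <string.h>

    #define N 9

    /* Hypothetical solver skeleton; only the storage pattern matters here. */
    int solve(const char puzzle[N * N])
    {
        char board[N * N];        /* working copy, automatic (stack) storage */
        int  candidates[N * N];   /* per-cell candidate masks, also on the stack */

        memcpy(board, puzzle, sizeof board);
        memset(candidates, 0, sizeof candidates);

        /* ... the real search would go here; the point is only that the
         * scratch state above costs nothing to (re)allocate per call ... */
        return board[0] != 0 && candidates[0] == 0;   /* placeholder result */
    }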

laser

Posts with mentions or reviews of laser. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-23.
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    It depends.

    You need 2-3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't do this on its own, because it only creates the extra accumulators when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats.

    There is an escape hatch in -ffast-math (a minimal sketch of the multi-accumulator version follows this list).

    I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks%2Ffp...

  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    Ah,

    It was from an older implementation that wasn't compatible with Nim v2. I've commented it out.

    If you pull again it should work.

    > Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.

    I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...

    It defaults to 1920x1920 * 1920x1920. Note that if you activate the benchmarks against PyTorch Glow, in the past it didn't support dimensions that weren't a multiple of 16 or something along those lines; I'm not sure whether that's still the case.

    Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...

    It also supports pre-packing, which is useful for reimplementing a batch_matmul like the one cuBLAS provides, and is quite handy for convolution via matmul (a generic packing sketch follows this list).

  • Why does working with a transposed tensor not make the following operations less performant?
    2 projects | /r/MLQuestions | 19 Jun 2021
    For convolutions: https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
  • Improve performance with SIMD intrinsics
    1 project | /r/C_Programming | 25 Feb 2021
    You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup going from naive transposition to double loop tiling (a sketch of both follows this list); see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238
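
To make the accumulator point from the "From slow to SIMD" reply above concrete, here is a minimal C example; it is not taken from the laser benchmarks, and the function names and the choice of four accumulators are arbitrary. The two versions can differ in the last bits exactly because float addition is not associative, which is why a compiler only performs this rewrite itself under -ffast-math or an equivalent flag.

    #include <stddef.h>

    /* Naive reduction: a single dependency chain, so each add must wait for
     * the previous one to finish. */
    float sum_naive(const float *x, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    /* Four independent accumulators: the adds can overlap in the pipeline,
     * exposing instruction-level parallelism. */
    float sum_ilp(const float *x, size_t n)
    {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += x[i + 0];
            s1 += x[i + 1];
            s2 += x[i + 2];
            s3 += x[i + 3];
        }
        for (; i < n; i++)        /* remaining tail elements */
            s0 += x[i];
        return (s0 + s1) + (s2 + s3);
    }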
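
The packing step mentioned in the GEMM exchange above can be sketched roughly as follows. This is a generic illustration with a made-up panel width (TILE_N), not the routine laser actually uses: a panel of B is copied once into contiguous memory so the inner kernel streams through it linearly, and pre-packing that panel for reuse across many multiplications is the idea behind a cheap batched matmul.

    #include <stddef.h>

    #define TILE_N 8   /* hypothetical panel width; real kernels tune this */

    /* Copy a kc x TILE_N panel of row-major B (leading dimension ldb) into a
     * contiguous buffer. After packing, the inner kernel reads `packed`
     * sequentially instead of striding through B by ldb on every row. */
    static void pack_b_panel(const float *B, size_t ldb, size_t kc, float *packed)
    {
        for (size_t k = 0; k < kc; k++)
            for (size_t j = 0; j < TILE_N; j++)
                *packed++ = B[k * ldb + j];
    }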
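
Finally, for the transposition advice in the last post, here is a minimal sketch of the two versions being compared, naive versus double loop tiling. The BLOCK size is a made-up placeholder to tune per cache level, and the roughly 3x speedup is the post's observation, not a claim about this particular code.

    #include <stddef.h>

    #define BLOCK 32   /* hypothetical tile size */

    /* Naive transpose: the write side walks dst with a stride of n floats,
     * so it misses the cache on almost every access once n is large. */
    void transpose_naive(float *dst, const float *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                dst[j * n + i] = src[i * n + j];
    }

    /* Double loop tiling: process BLOCK x BLOCK tiles so both the source and
     * the destination tile stay resident in cache while they are touched. */
    void transpose_tiled(float *dst, const float *src, size_t n)
    {
        for (size_t ii = 0; ii < n; ii += BLOCK)
            for (size_t jj = 0; jj < n; jj += BLOCK) {
                size_t i_end = ii + BLOCK < n ? ii + BLOCK : n;
                size_t j_end = jj + BLOCK < n ? jj + BLOCK : n;
                for (size_t i = ii; i < i_end; i++)
                    for (size_t j = jj; j < j_end; j++)
                        dst[j * n + i] = src[i * n + j];
            }
    }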

What are some alternatives?

When comparing plb2 and laser you can also consider the following projects:

c-examples - Example C code

Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends

weave - A state-of-the-art multithreading runtime: message-passing based, fast, scalable, ultra-low overhead

nim-sos - Nim wrapper for Sandia-OpenSHMEM

tarantool - Get your data in RAM. Get compute close to data. Enjoy the performance.

ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!

blis - BLAS-like Library Instantiation Software Framework

analisis-numerico-computo-cientifico - Numerical analysis and scientific computing

related_post_gen - Data Processing benchmark featuring Rust, Go, Swift, Zig, Julia etc.

1brc - 1BRC in .NET among fastest on Linux

JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]