plb2 vs laser
| | plb2 | laser |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 238 | 261 |
| Growth | - | 1.5% |
| Activity | 9.4 | 3.6 |
| Latest commit | 20 days ago | 4 months ago |
| Language | C | Nim |
| License | Creative Commons Zero v1.0 Universal | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
plb2 mentions
-
Byte-Sized Swift: Building Tiny Games for the Playdate
https://github.com/attractivechaos/plb2 - a limited but broad comparison across a large number of languages. Swift and Nim both compare favourably to C.
-
The One Billion Row Challenge in Go: from 1m45s to 4s in nine solutions
https://github.com/attractivechaos/plb2/blob/master/README.m...
Synthetic benchmarks aside, I think as far as average (Spring Boots of the world) code goes, Go beats Java almost every time, often in fewer lines than the usual pom.xml
-
Python 3.13 Gets a JIT
I wouldn't be so enthusiastic. Look at other languages that have a JIT now: Ruby and PHP. After years of effort, they are still an order of magnitude slower than V8 and even PyPy [1]. It seems to me that you need to design a JIT implementation from the ground up to get good performance – V8, Dart, LuaJIT and PyPy are like this; if you start with a pure interpreter, it may be difficult to speed it up later.
[1] https://github.com/attractivechaos/plb2
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
A curious thing about Swift: after https://github.com/attractivechaos/plb2/pull/23, the matrix multiplication example is comparable to C and Rust. However, I don’t see a way to idiomatically optimise the sudoku example, whose main overhead is allocating several arrays each time solve() is called. Apparently, in Swift there is no such thing as static array allocation. That’s very unfortunate.
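For readers wondering what static array allocation buys here: in C (and Nim), a fixed-size array is a plain value on the stack, so repeated calls allocate nothing on the heap. A minimal Nim sketch of the idea (a hypothetical solver skeleton, not plb2's actual code):

```nim
# A fixed-size array in Nim is a value type living on the stack:
# calling solve() repeatedly performs no heap allocation, which is
# exactly the per-call overhead described for the Swift port.
proc solve(board: var array[81, int8]): bool =
  var candidates: array[81, uint16]   # stack-allocated scratch space
  candidates[0] = 0b1_1111_1111       # e.g. all nine digits possible
  # ... constraint propagation and search would go here ...
  result = false
```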
laser mentions
-
From slow to SIMD: A Go optimization story
It depends.
You need 2-3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't do it on its own, because it only creates multiple accumulators when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats.
There is an escape hatch in -ffast-math.
I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks/fp...
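To make the accumulator point concrete, here is a minimal Nim sketch (illustrative only, not the benchmark code linked above): the unrolled version keeps four independent partial sums, so the additions no longer form a single serial dependency chain.

```nim
# Naive reduction: one accumulator, one serial dependency chain;
# each add must wait for the previous one to retire.
proc naiveSum(xs: openArray[float32]): float32 =
  for x in xs:
    result += x

# Four independent accumulators expose instruction-level parallelism.
# The compiler won't do this by itself (without -ffast-math) because
# it reorders the additions, and float addition is not associative.
proc unrolledSum(xs: openArray[float32]): float32 =
  var a0, a1, a2, a3: float32
  var i = 0
  while i + 4 <= xs.len:
    a0 += xs[i]
    a1 += xs[i + 1]
    a2 += xs[i + 2]
    a3 += xs[i + 3]
    i += 4
  while i < xs.len:        # handle the leftover tail
    a0 += xs[i]
    inc i
  result = (a0 + a1) + (a2 + a3)
```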
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
Ah, it was from an older implementation that wasn't compatible with Nim v2. I've commented it out.
If you pull again it should work.
> Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.
I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...
It defaults to 1920x1920 * 1920x1920. Note: if you activate the benchmarks against PyTorch Glow, in the past it didn't support sizes that aren't a multiple of 16 (or something similar); I'm not sure about today.
Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...
It also supports pre-packing, which is useful for reimplementing batch_matmul like what cuBLAS provides, and is quite useful for convolution via matmul.
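For readers unfamiliar with packing: the idea is to copy one tile of the source matrix into a contiguous buffer so the GEMM micro-kernel streams it linearly from cache; the copy cost is amortised over the many times the tile is reused, which is why it only pays off once matrices are reasonably large. A toy Nim sketch of the concept (names and layout are illustrative, not laser's actual API):

```nim
# Copy a kc x nc panel of a row-major matrix B (with N columns) into
# a contiguous buffer. The micro-kernel then reads `packed` purely
# sequentially, which is cache- and prefetcher-friendly.
proc packPanelB(packed: var seq[float32], B: openArray[float32],
                N, k0, kc, j0, nc: int) =
  var idx = 0
  for k in k0 ..< k0 + kc:
    for j in j0 ..< j0 + nc:
      packed[idx] = B[k * N + j]
      inc idx
```

Pre-packing simply performs this copy once up front and reuses the buffer across many multiplications, which is what makes a batched matmul (and convolution via matmul) cheap.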
-
Why does working with a transposed tensor not make the following operations less performant?
For convolutions: https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
-
Improve performance with SIMD intrinsics
You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup going from naive transposition to double loop tiling, see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238
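As a rough sketch of the two versions being compared (simplified from the linked benchmark; square row-major matrices, tile size picked for illustration):

```nim
const Tile = 64  # tune so two tiles fit comfortably in L1 cache

# Naive transpose: the writes to dst stride across memory, so for
# large n nearly every write touches a new cache line.
proc transposeNaive(dst: var seq[float32], src: seq[float32], n: int) =
  for i in 0 ..< n:
    for j in 0 ..< n:
      dst[j * n + i] = src[i * n + j]

# Double loop tiling: work on Tile x Tile blocks so both the reads
# and the writes stay within a small, cache-resident footprint.
proc transposeTiled(dst: var seq[float32], src: seq[float32], n: int) =
  for ii in countup(0, n - 1, Tile):
    for jj in countup(0, n - 1, Tile):
      for i in ii ..< min(ii + Tile, n):
        for j in jj ..< min(jj + Tile, n):
          dst[j * n + i] = src[i * n + j]
```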
What are some alternatives?
c-examples - Example C code
Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
weave - A state-of-the-art multithreading runtime: message-passing based, fast, scalable, ultra-low overhead
nim-sos - Nim wrapper for Sandia-OpenSHMEM
tarantool - Get your data in RAM. Get compute close to data. Enjoy the performance.
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
blis - BLAS-like Library Instantiation Software Framework
analisis-numerico-computo-cientifico - Numerical analysis and scientific computing (course materials, in Spanish)
related_post_gen - Data Processing benchmark featuring Rust, Go, Swift, Zig, Julia etc.
1brc - 1BRC in .NET among fastest on Linux
JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]