| | weave | laser |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 524 | 261 |
| Growth | - | 1.5% |
| Activity | 3.0 | 3.6 |
| Last Commit | 5 months ago | 4 months ago |
| Language | Nim | Nim |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
weave
The GIL can now be disabled in Python's main branch
-
Maybe Everything Is a Coroutine
GPU drivers provide an event system:
- Cuda: https://github.com/mratsim/weave/issues/133
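For a sense of what a coroutine runtime could hook into, here is a minimal sketch of polling a CUDA event from Nim. The bindings are hand-written assumptions against the CUDA runtime API (cudaEventCreate, cudaEventRecord and cudaEventQuery are real entry points, but this is illustrative, not a maintained wrapper):

```nim
type
  CudaEvent = distinct pointer   # opaque cudaEvent_t handle
  CudaError = cint               # 0 == cudaSuccess, 600 == cudaErrorNotReady

const cudaSuccess = CudaError(0)

# Assumed declarations against the CUDA runtime shared library.
proc cudaEventCreate(event: ptr CudaEvent): CudaError
  {.importc, dynlib: "libcudart.so".}
proc cudaEventRecord(event: CudaEvent; stream: pointer = nil): CudaError
  {.importc, dynlib: "libcudart.so".}
proc cudaEventQuery(event: CudaEvent): CudaError
  {.importc, dynlib: "libcudart.so".}

proc gpuEventDone(ev: CudaEvent): bool =
  ## Non-blocking check: a scheduler can poll this between tasks and
  ## resume the coroutine waiting on the GPU once the event has fired.
  cudaEventQuery(ev) == cudaSuccess
```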
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
Note: the theoretical peak limit is hardcoded and uses my previous machine, an i9-9980XE (a back-of-the-envelope derivation is sketched below).
It may be that your BLAS library is not named libopenblas.so; you can change that here: https://github.com/mratsim/laser/blob/master/benchmarks/thir...
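For reference, such a peak is usually derived along these lines. This is a back-of-the-envelope sketch: the ~3.0 GHz AVX-512 all-core clock is my assumption, and the constants actually hardcoded in the benchmark may differ:

```nim
const
  cores       = 18     # i9-9980XE core count
  ghz         = 3.0    # assumed AVX-512 all-core clock
  fp32PerVec  = 16     # float32 lanes in a 512-bit register
  fmaUnits    = 2      # AVX-512 FMA ports per core on Skylake-X
  flopsPerFma = 2      # one FMA counts as a multiply plus an add

let peakGFlops = cores.float * ghz * fp32PerVec.float *
                 fmaUnits.float * flopsPerFma.float
echo peakGFlops, " GFLOP/s single-precision peak"   # ~3456 GFLOP/s
```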
Implementation is in this folder: https://github.com/mratsim/laser/tree/master/laser/primitive...
in particular, tiling, cache and register optimization: https://github.com/mratsim/laser/blob/master/laser/primitive...
AVX512 code generator: https://github.com/mratsim/laser/blob/master/laser/primitive...
And a generic Scalar/SSE/AVX/AVX2/AVX512 microkernel generator (these are Nim macros that generate code at compile-time): https://github.com/mratsim/laser/blob/master/laser/primitive...
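To make the blocking idea concrete, here is a minimal loop-tiled matmul in Nim. This illustrates the tiling principle only; it is not laser's actual packed microkernel, and the tile size is a placeholder:

```nim
proc gemmTiled(m, n, k: int; A, B: seq[float32]; C: var seq[float32]) =
  ## C[m x n] += A[m x k] * B[k x n], all row-major.
  const T = 64                         # tile edge, tuned per cache level
  for i0 in countup(0, m-1, T):
    for k0 in countup(0, k-1, T):
      for j0 in countup(0, n-1, T):
        # Work block by block so the A, B and C tiles stay cache-resident.
        for i in i0 ..< min(i0+T, m):
          for kk in k0 ..< min(k0+T, k):
            let a = A[i*k + kk]
            for j in j0 ..< min(j0+T, n):
              C[i*n + j] += a * B[kk*n + j]
```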
I'll come back later with details on how to use my custom HPC threadpool Weave instead of OpenMP (https://github.com/mratsim/weave/tree/master/benchmarks/matm...)
-
Nim vs Rust Benchmarks
In my benchmarks, Nim is faster than Rust:
- Multithreading runtime (i.e. Rayon vs Weave, https://github.com/mratsim/weave; see the sketch after this list)
- Cryptography: https://hackmd.io/@gnark/eccbench#Pairing
- Scientific computing / matrix multiplication: https://github.com/bluss/matrixmultiply/issues/34#issuecomme...
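For context, fork/join parallelism with Weave looks roughly like this, a minimal sketch in the style of the fibonacci example from Weave's README:

```nim
import weave

proc fib(n: int): int =
  if n < 2:
    return n
  let x = spawn fib(n-1)   # fork a task onto the work-stealing runtime
  let y = fib(n-2)
  result = sync(x) + y     # join the forked task

init(Weave)
echo fib(20)
exit(Weave)
```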
There is no inherent reason why a Nim program would be slower than Rust.
-
Aren't green threads just better than async/await?
If you're interested in diving into this, I have reviewed solutions to cactus stacks / split stacks here: https://github.com/mratsim/weave/blob/master/weave/memory/multithreaded_memory_management.md
-
Nim 2.0 – Thoughts
[4] https://github.com/mratsim/weave
laser
-
From slow to SIMD: A Go optimization story
It depends.
You need 2-3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't create them automatically: that transformation is only valid when the operation is associative, i.e. (a+b)+c = a+(b+c), which holds for integers but not for floats.
There is an escape hatch in -ffast-math.
I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks%2Ffp...
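To make that concrete, here is a minimal sketch of the manual version: four independent accumulators expose instruction-level parallelism, at the cost of the rounding-order change the compiler is not allowed to make for you:

```nim
proc sumUnrolled4(xs: openArray[float32]): float32 =
  var a0, a1, a2, a3: float32
  var i = 0
  while i + 4 <= xs.len:
    a0 += xs[i]            # four independent dependency chains,
    a1 += xs[i+1]          # so the additions can overlap in the pipeline
    a2 += xs[i+2]
    a3 += xs[i+3]
    i += 4
  while i < xs.len:        # scalar remainder
    a0 += xs[i]
    inc i
  result = (a0 + a1) + (a2 + a3)   # different rounding order than a serial sum
```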
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
Ah,
It was from an older implementation that wasn't compatible with Nim v2. I've commented it out.
If you pull again it should work.
> Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.
I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...
It defaults to a 1920x1920 * 1920x1920 multiplication. Note: if you activate the benchmarks against PyTorch Glow, in the past it didn't support dimensions that weren't a multiple of 16 (or something along those lines); I'm not sure about today.
Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...
It also supports pre-packing, which is useful for reimplementing a batched matmul like the one cuBLAS provides, and is quite handy for convolution via matmul.
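As a simplified illustration of what packing means here (laser's real routines additionally deal with arbitrary strides and SIMD alignment), this hypothetical packPanelB copies a kc x nr panel of a row-major B into a contiguous buffer that the microkernel can stream linearly:

```nim
proc packPanelB(B: seq[float32]; n, k0, j0, kc, nr: int): seq[float32] =
  ## Copy the kc x nr panel of row-major B (k x n) starting at (k0, j0)
  ## into a contiguous buffer, so the microkernel reads it sequentially.
  result = newSeq[float32](kc * nr)
  for kk in 0 ..< kc:
    for j in 0 ..< nr:
      result[kk*nr + j] = B[(k0 + kk)*n + (j0 + j)]
```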
-
Why does working with a transposed tensor not make the following operations less performant?
For convolutions: https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
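Part of the answer is that in a strided-tensor design, transposing is pure metadata. A minimal sketch under that model (the names are illustrative, not laser's API):

```nim
type
  Buffer = ref seq[float32]   # storage shared between views
  Matrix = object
    data: Buffer
    rows, cols: int
    rowStride, colStride: int

proc at(m: Matrix; i, j: int): float32 =
  m.data[][i*m.rowStride + j*m.colStride]

proc transposed(m: Matrix): Matrix =
  # Same buffer, swapped shape/stride metadata: O(1), no element moves.
  Matrix(data: m.data, rows: m.cols, cols: m.rows,
         rowStride: m.colStride, colStride: m.rowStride)
```

The cost only shows up later, in whether subsequent kernels touch memory contiguously or not.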
-
Improve performance with SIMD intrinsics
You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup between a naive transposition and double loop tiling, see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238
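A minimal sketch of the two versions being compared there (the tile size is illustrative; the linked benchmark tunes it):

```nim
proc transposeNaive(src: seq[float32]; dst: var seq[float32]; rows, cols: int) =
  for i in 0 ..< rows:
    for j in 0 ..< cols:
      dst[j*rows + i] = src[i*cols + j]   # strided writes thrash the cache

proc transposeTiled(src: seq[float32]; dst: var seq[float32]; rows, cols: int) =
  const T = 32                            # tile edge, a few cache lines wide
  for i0 in countup(0, rows-1, T):
    for j0 in countup(0, cols-1, T):
      # Both the src reads and dst writes now stay within a T x T tile.
      for i in i0 ..< min(i0+T, rows):
        for j in j0 ..< min(j0+T, cols):
          dst[j*rows + i] = src[i*cols + j]
```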
What are some alternatives?
eio - Effects-based direct-style IO for multicore OCaml
Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
httpbeast - A highly performant, multi-threaded HTTP 1.1 server written in Nim.
nim-sos - Nim wrapper for Sandia-OpenSHMEM
matrixmultiply - General matrix multiplication of f32 and f64 matrices in Rust. Supports matrices with general strides.
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
Edith - Electronic Design in Swift
analisis-numerico-computo-cientifico - Numerical analysis and scientific computing
ocaml-multicore - Multicore OCaml
blis - BLAS-like Library Instantiation Software Framework
cosmopolitan - build-once run-anywhere C library
JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]