array vs laser
| | array | laser |
|---|---|---|
| Mentions | 5 | 6 |
| Stars | 188 | 261 |
| Growth | - | 2.3% |
| Activity | 6.9 | 3.6 |
| Last Commit | 4 months ago | 4 months ago |
| Language | C++ | Nim |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
array
-
Einsum in 40 Lines of Python
I wrote a library in C++ (I know, probably a non-starter for most reading this) that I think does most of what you want, and covers some of the other requests in this thread (it generalizes to more than just multiply-add): https://github.com/dsharlet/array?tab=readme-ov-file#einstei....
A matrix multiply written with this library looks like:
enum { i = 2, j = 0, k = 1 };
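The snippet above appears to be cut off after the enum. A rough sketch of how the full expression might read, assuming the ein/make_ein_sum helpers described in the Einstein-reductions section of the README linked above (the header path, namespace, and exact call shape here are assumptions, not verified against the library):

    // Header path and namespace are assumptions; see the library's README.
    #include "array/ein_reduce.h"
    using namespace nda;

    // A and B are matrices built with the library's types (construction omitted).
    // Compute C(i, j) = sum over k of A(i, k) * B(k, j); the enum values
    // choose which loop dimension each index is assigned to.
    enum { i = 2, j = 0, k = 1 };
    auto C = make_ein_sum<float, i, j>(ein<i, k>(A) * ein<k, j>(B));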
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
I should have mentioned somewhere that I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want the thread parallelism outside code like this anyway.
As for the inner loop not being well optimized... the disassembly looks like basically the same thing as OpenBLAS. There's disassembly in the comments of that file to show what code it generates; I'd love to know what you think is lacking! The only differences between the one I linked and this are prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...
-
A basic introduction to NumPy's einsum
If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions
It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools needed for humans to express the code in a way that a good compiler can turn into really good code.
laser
-
From slow to SIMD: A Go optimization story
It depends.
You need 2~3 accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't do it because it only creates those when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats.
There is an escape hatch in -ffast-math.
I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks/fp...
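To make the accumulator point concrete, here is a minimal C++ sketch (not taken from the linked benchmarks) contrasting a single dependency chain with several independent ones; splitting the sum changes the order of the float additions, which is exactly why the compiler refuses to do it without -ffast-math:

    #include <cstddef>

    // One accumulator: every add waits on the previous one, so the loop is
    // limited by the latency of a float add rather than its throughput.
    float sum_naive(const float* x, std::size_t n) {
        float acc = 0.0f;
        for (std::size_t i = 0; i < n; ++i) acc += x[i];
        return acc;
    }

    // Four independent accumulators (the right count depends on the CPU's
    // add latency and throughput): the chains overlap in the pipeline.
    float sum_multi_acc(const float* x, std::size_t n) {
        float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            a0 += x[i + 0];
            a1 += x[i + 1];
            a2 += x[i + 2];
            a3 += x[i + 3];
        }
        for (; i < n; ++i) a0 += x[i];  // tail elements
        return (a0 + a1) + (a2 + a3);   // different rounding than sum_naive
    }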
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
Ah, it was from an older implementation that wasn't compatible with Nim v2. I've commented it out.
If you pull again it should work.
> Anyway the reason for your competitive performance is likely that you are benchmarking with very small matrices. OpenBLAS spends some time preprocessing the tiles which doesn't really pay off until they become really huge.
I don't get why you think it's impossible to reach BLAS speed. The matrix sizes are configured here: https://github.com/mratsim/laser/blob/master/benchmarks/gemm...
It defaults to 1920x1920 * 1920x1920. Note that if you activate the benchmarks against PyTorch Glow, in the past it didn't support sizes that weren't a multiple of 16 or something; not sure about today.
Packing is done here: https://github.com/mratsim/laser/blob/master/laser/primitive...
It also supports pre-packing, which is useful to reimplement a batched matmul like the one cuBLAS provides, and is quite useful for convolution via matmul.
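For readers unfamiliar with the term: packing in a BLAS-style GEMM copies the tile the micro-kernel is about to work on into a small contiguous buffer, so the kernel streams it with unit stride and stays in cache. An illustrative C++ sketch (not laser's actual code; names and tile parameters are made up):

    #include <cstddef>

    // Copy a KC x NR panel of a row-major matrix B (leading dimension ldb)
    // into a contiguous buffer. Pre-packing amounts to doing this once up
    // front and reusing the packed buffer across several matmul calls.
    void pack_b_panel(const float* B, std::size_t ldb,
                      std::size_t KC, std::size_t NR,
                      float* packed) {
        for (std::size_t k = 0; k < KC; ++k)
            for (std::size_t j = 0; j < NR; ++j)
                *packed++ = B[k * ldb + j];
    }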
-
Why does working with a transposed tensor not make the following operations less performant?
For convolutions: https://github.com/numforge/laser/blob/e23b5d63/research/convolution_optimisation_resources.md
-
Improve performance with SIMD intrinsics
You can train yourself on matrix transposition first. It's straightforward to get a 3x speedup going from naive transposition to double loop tiling; see: https://github.com/numforge/laser/blob/d1e6ae6/benchmarks/transpose/transpose_bench.nim#L238
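The linked benchmark is in Nim; the C++ sketch below shows the two variants being compared, a naive transpose and a doubly tiled one (the tile size is illustrative and should be tuned per CPU):

    #include <cstddef>

    // Naive transpose: one of the two arrays is walked column-wise, so for
    // large matrices almost every access to that array misses the cache.
    void transpose_naive(const float* src, float* dst,
                         std::size_t rows, std::size_t cols) {
        for (std::size_t i = 0; i < rows; ++i)
            for (std::size_t j = 0; j < cols; ++j)
                dst[j * rows + i] = src[i * cols + j];
    }

    // Double loop tiling: process TILE x TILE blocks so both the reads and
    // the writes of a block stay in cache before moving on.
    void transpose_tiled(const float* src, float* dst,
                         std::size_t rows, std::size_t cols) {
        constexpr std::size_t TILE = 32;
        for (std::size_t ii = 0; ii < rows; ii += TILE)
            for (std::size_t jj = 0; jj < cols; jj += TILE)
                for (std::size_t i = ii; i < ii + TILE && i < rows; ++i)
                    for (std::size_t j = jj; j < jj + TILE && j < cols; ++j)
                        dst[j * rows + i] = src[i * cols + j];
    }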
What are some alternatives?
optimizing-the-memory-layout-of-std-tuple - Optimizing the memory layout of std::tuple
Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
NumPy - The fundamental package for scientific computing with Python.
nim-sos - Nim wrapper for Sandia-OpenSHMEM
cadabra2 - A field-theory motivated approach to computer algebra.
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
alphafold2 - To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
analisis-numerico-computo-cientifico - Numerical analysis and scientific computing
Einsum.jl - Einstein summation notation in Julia
blis - BLAS-like Library Instantiation Software Framework
c-examples - Example C code
JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]