maxas
Assembler for NVIDIA Maxwell architecture (by NervanaSystems)
blislab
BLISlab: A Sandbox for Optimizing GEMM (by flame)
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
maxas
Posts with mentions or reviews of maxas. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-15.
- With LLVM and MLIR, is manual cuda optimizing still important?
- How to make CUDA libraries more performant?
cuDNN is already very optimized, but if you want to read up on optimizing, here you go (Maxwell-specific): https://github.com/NervanaSystems/maxas/wiki/SGEMM; there is an accompanying paper, or read NVIDIA CUTLASS.
- Image convolution optimisation strategies.
As explained in https://github.com/NervanaSystems/maxas/wiki/SGEMM, you need to do the same tiling on GPUs; a minimal kernel sketch follows below.
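Below is a minimal sketch of the shared-memory tiling idea that the SGEMM write-up starts from, in plain CUDA C. The kernel name, the 16x16 tile size, and the assumption that N is a multiple of the tile are illustrative choices, not the maxas implementation, which is hand-scheduled Maxwell SASS with register blocking and double buffering on top of this.

    // Sketch: one level of tiling for C = A * B (row-major, square N x N,
    // N assumed to be a multiple of TILE for brevity). Each thread block
    // computes a TILE x TILE tile of C, staging matching tiles of A and B
    // in shared memory so every global load is reused TILE times.
    // Illustrative only; names and sizes are assumptions, not maxas code.
    #define TILE 16

    __global__ void sgemm_tiled(const float* A, const float* B, float* C, int N)
    {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;

        for (int t = 0; t < N; t += TILE) {
            // stage one tile of A and one tile of B
            As[threadIdx.y][threadIdx.x] = A[row * N + (t + threadIdx.x)];
            Bs[threadIdx.y][threadIdx.x] = B[(t + threadIdx.y) * N + col];
            __syncthreads();

            // each thread accumulates a partial dot product over the staged tiles
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        C[row * N + col] = acc;
    }

    // Typical launch: sgemm_tiled<<<dim3(N/TILE, N/TILE), dim3(TILE, TILE)>>>(A, B, C, N);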
blislab
Posts with mentions or reviews of blislab. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-07-27.
- Image convolution optimisation strategies.
Efficient matrix multiplications or convolutions on CPU use layered tiling to optimize for registers, the L1 and L2 caches, the TLB, and the L3 cache (if it exists). This improves speed by over 150x versus a naive triple for-loop matrix multiplication, and the same applies to convolution. See the overview at https://www.cs.utexas.edu/users/flame/laff/pfhp/week3-goto.html and the hands-on exercises at https://github.com/flame/blislab; a blocked-loop sketch follows below.
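As a rough sketch of that blocking idea in plain C (the block size, the zero-initialized C, and n being a multiple of the block edge are illustrative simplifications; BLISlab adds packing, per-cache-level block sizes, and a register-blocked micro-kernel on top):

    /* Naive triple loop vs. one level of cache blocking. */
    enum { BS = 64 };  /* illustrative block edge, chosen so tiles fit in cache */

    void matmul_naive(int n, const float* A, const float* B, float* C)
    {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                float acc = 0.0f;
                for (int k = 0; k < n; ++k)             /* B is walked with stride n, */
                    acc += A[i * n + k] * B[k * n + j]; /* so cache reuse is poor */
                C[i * n + j] = acc;
            }
    }

    void matmul_blocked(int n, const float* A, const float* B, float* C)
    {
        /* assumes C starts zeroed and n is a multiple of BS */
        for (int ii = 0; ii < n; ii += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int jj = 0; jj < n; jj += BS)
                    /* the inner loops touch only BS x BS tiles, which stay
                       resident in cache while they are reused */
                    for (int i = ii; i < ii + BS; ++i)
                        for (int k = kk; k < kk + BS; ++k) {
                            float a = A[i * n + k];
                            for (int j = jj; j < jj + BS; ++j)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }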
What are some alternatives?
When comparing maxas and blislab, you can also consider the following projects:
Halide - a language for fast, portable data-parallel computation
how-to-optimize-gemm
triton - Development repository for the Triton language and compiler
Image-Convolutaion-OpenCL
llama2.c - Llama 2 Everywhere (L2E)
cutlass - CUDA Templates for Linear Algebra Subroutines
blis - BLAS-like Library Instantiation Software Framework