blis vs related_post_gen

| | blis | related_post_gen |
|---|---|---|
| Mentions | 17 | 15 |
| Stars | 2,091 | 274 |
| Growth | 3.5% | - |
| Activity | 7.0 | 9.9 |
| Last commit | 7 days ago | about 2 months ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
blis
-
Faer-rs: Linear algebra foundation for the Rust programming language
BLIS is an interesting new direction in that regard: https://github.com/flame/blis
>The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels.
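The "very simple kernels" mentioned there are microkernels: tiny routines that update an MR x NR block of C with a sequence of rank-1 updates over packed panels of A and B. A minimal scalar sketch of the shape of such a kernel (MR/NR values are illustrative; real BLIS microkernels are hand-vectorized per architecture):

```c
/* Sketch of a BLIS-style gemm microkernel: C[MR x NR] += A[MR x k] * B[k x NR],
 * where A and B have already been packed so that each rank-1 update reads one
 * column of A and one row of B contiguously. Illustrative block sizes only. */
#define MR 4
#define NR 4

void gemm_ukernel(int k,
                  const double *a,   /* packed: k columns of length MR */
                  const double *b,   /* packed: k rows of length NR    */
                  double *c, int rs_c, int cs_c)
{
    double ab[MR * NR] = {0};   /* accumulator; kept in registers in real kernels */

    for (int p = 0; p < k; ++p)            /* one rank-1 update per iteration */
        for (int j = 0; j < NR; ++j)
            for (int i = 0; i < MR; ++i)
                ab[i + j * MR] += a[p * MR + i] * b[p * NR + j];

    for (int j = 0; j < NR; ++j)           /* write back into strided C */
        for (int i = 0; i < MR; ++i)
            c[i * rs_c + j * cs_c] += ab[i + j * MR];
}
```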
-
Optimize sgemm on RISC-V platform
There is a recent update to BLIS, an alternative to BLAS, that includes a number of RISC-V performance optimizations.
https://github.com/flame/blis/pull/737
-
BLIS: Portable basis for high-performance BLAS-like linear algebra libs
https://github.com/flame/blis/blob/master/docs/Performance.m...
It seems that the selling point is that BLIS does multi-core quite well. I am especially impressed that it does as well as the highly optimized Intel's MKL on Intel's CPUs.
I do not see the selling point of BLIS-specific APIs, though. The whole point of having an open BLAS API standard is that numerical libraries should be drop-in replaceable, so when a new library (such as BLIS here) comes along, one could just re-link the library and reap the performance gain immediately.
What is interesting is that numerical algebra work, by nature, is mostly embarrassingly parallel, so it should not be too difficult to write multi-core implementations. And yet, BLIS here performs so much better than some other industry-leading implementations on multi-core configurations. So the question is not why BLIS does so well; the question is why some other implementations do so poorly.
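That drop-in property is visible at the call site: the snippet below is plain CBLAS, so the same source builds against OpenBLAS, MKL, or BLIS's BLAS/CBLAS compatibility layer, and only the link line changes. A minimal sketch; the exact link flags depend on how the library was installed:

```c
/* dgemm via the standard CBLAS interface: C = alpha*A*B + beta*C.
 * The same source links against any conforming implementation, e.g.
 *   cc gemm.c -lopenblas      or      cc gemm.c -lblis
 * (exact flags vary by system). */
#include <cblas.h>
#include <stdio.h>

int main(void)
{
    double A[2 * 2] = {1, 2, 3, 4};   /* row-major 2x2 */
    double B[2 * 2] = {5, 6, 7, 8};
    double C[2 * 2] = {0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,              /* m, n, k        */
                1.0, A, 2,            /* alpha, A, lda  */
                B, 2,                 /* B, ldb         */
                0.0, C, 2);           /* beta, C, ldc   */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```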
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
First we can use Laser, which was my initial BLAS experiment in 2019. At the time, OpenBLAS didn't properly use the AVX-512 VPUs (see this BLIS thread: https://github.com/flame/blis/issues/352). It has made progress since then; still, on my current laptop, performance is in the same range.
Reproduction:
-
The Art of High Performance Computing
https://github.com/flame/blis/
Field et al., recent winners of the James H. Wilkinson Prize for Numerical Software.
Field and Goto both worked with Robert van de Geijn. Lots of TACC interaction in that broader team.
-
[D] Which BLAS library to choose for apple silicon?
BLIS is fine too~ https://github.com/flame/blis
-
Column Vectors vs. Row Vectors
Here's BLIS's object API:
https://github.com/flame/blis/blob/master/docs/BLISObjectAPI...
Searching "object" in BLIS's README (https://github.com/flame/blis) to see what they think of it:
"Objects are relatively lightweight structs and passed by address, which helps tame function calling overhead."
"This is API abstracts away properties of vectors and matrices within obj_t structs that can be queried with accessor functions. Many developers and experts prefer this API over the typed API."
In my opinion, this API is a strict improvement over BLAS. I do not think there is any reason to prefer the old BLAS-style API over an improvement like this.
Regarding your own experience, it's great that using BLAS proved to be a valuable learning experience for you. But the argument that the BLAS API is somehow uniquely helpful for learning how to program numerical algorithms efficiently, or that it will help you avoid performance problems, does not hold. It is possible to replace the BLAS API with a more modern and intuitive API that keeps the same benefits. To be clear, the benefits here are direct memory management and control of striding and matrix layout, which create opportunities for optimization. There is nothing unique about BLAS in this regard; it's possible to learn these lessons using any of the other listed options if you're paying attention and being systematic.
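For comparison, here is roughly what a gemm looks like through that object API (a sketch based on the BLISObjectAPI docs; error handling and explicit initialization omitted):

```c
/* gemm through BLIS's object API: c := alpha*a*b + beta*c.
 * Dimensions, strides, and datatype travel inside obj_t, so the call
 * site no longer threads lda/ldb/ldc and transpose flags by hand. */
#include "blis.h"

int main(void)
{
    obj_t a, b, c;
    dim_t m = 4, n = 4, k = 4;

    /* rs = cs = 0 asks BLIS to choose a default (column-major) layout. */
    bli_obj_create(BLIS_DOUBLE, m, k, 0, 0, &a);
    bli_obj_create(BLIS_DOUBLE, k, n, 0, 0, &b);
    bli_obj_create(BLIS_DOUBLE, m, n, 0, 0, &c);

    bli_randm(&a);
    bli_randm(&b);
    bli_setm(&BLIS_ZERO, &c);

    /* BLIS_ONE / BLIS_ZERO are global scalar objects provided by BLIS. */
    bli_gemm(&BLIS_ONE, &a, &b, &BLIS_ZERO, &c);

    bli_printm("c := a * b", &c, "%5.2f", "");

    bli_obj_free(&a);
    bli_obj_free(&b);
    bli_obj_free(&c);
    return 0;
}
```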
- BLIS: Portable software framework for high-performance linear algebra
-
Small Neural networks in Julia 5x faster than PyTorch
The article asks "Which micro-optimizations matter for BLAS3?", implying small dimensions, but doesn't actually tell me. The problem is well-studied, depending on what you consider "small". The most important thing is to avoid the packing step below an appropriate threshold (a dispatch sketch follows the links below). Implementations include libxsmm, blasfeo, and the "sup" version in blis (with papers on libxsmm and blasfeo). Eigen might also be relevant.
https://libxsmm.readthedocs.io/
https://blasfeo.syscop.de/
https://github.com/flame/blis
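The packing-threshold idea is simple to state in code: copying A and B into contiguous, cache-friendly buffers amortizes over large m*n*k, but for tiny matrices the copies cost more than they save, so below a cutoff you run a direct kernel on the original strided operands. A hedged sketch of the dispatch, with an invented threshold; real paths like blis's "sup" tune cutoffs per architecture and per shape:

```c
/* Dispatch sketch: skip the packing step below a size threshold.
 * SMALL_GEMM_THRESHOLD is an invented number for illustration. */
#define SMALL_GEMM_THRESHOLD (64LL * 64 * 64)

void gemm_small_direct(int m, int n, int k,
                       const double *A, int lda,
                       const double *B, int ldb,
                       double *C, int ldc)
{
    /* Direct kernel on the unpacked, column-major operands: no copies,
     * so the O(m*k + k*n) packing overhead is simply absent. */
    for (int j = 0; j < n; ++j)
        for (int p = 0; p < k; ++p) {
            double bpj = B[p + j * ldb];
            for (int i = 0; i < m; ++i)
                C[i + j * ldc] += A[i + p * lda] * bpj;
        }
}

void gemm(int m, int n, int k,
          const double *A, int lda,
          const double *B, int ldb,
          double *C, int ldc)
{
    if ((long long)m * n * k <= SMALL_GEMM_THRESHOLD) {
        gemm_small_direct(m, n, k, A, lda, B, ldb, C, ldc);
        return;
    }
    /* Large case: pack panels of A and B and run the blocked,
     * microkernel-based path (omitted in this sketch). */
}
```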
- Eigen: A C++ template library for linear algebra
related_post_gen
-
Speed up your code: don't pass structs bigger than 16 bytes on AMD64
Looks like HO means hand-optimized, with special data structures for this benchmark. (The 16-byte rule from the title is sketched below the link.)
see: https://github.com/jinyus/related_post_gen/#user-content-fn-...
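The 16-byte figure comes from the System V AMD64 calling convention: an aggregate of at most two eightbytes can be classified into registers, while anything larger is classified MEMORY and copied to the stack at every call. A small illustration with hypothetical types; in hot code you would confirm against the generated assembly:

```c
/* System V AMD64 ABI: a struct of at most 16 bytes can be passed in up
 * to two registers; a larger struct is classified MEMORY and the caller
 * copies it to the stack. Types here are hypothetical. */
#include <stdint.h>

typedef struct { int64_t a, b; }    Pair16;    /* 16 bytes: two registers */
typedef struct { int64_t a, b, c; } Triple24;  /* 24 bytes: MEMORY class  */

int64_t sum_pair(Pair16 p)          /* p arrives in two registers */
{
    return p.a + p.b;
}

int64_t sum_by_value(Triple24 t)    /* caller copies all 24 bytes */
{
    return t.a + t.b + t.c;
}

int64_t sum_by_pointer(const Triple24 *t)  /* one register holding a pointer */
{
    return t->a + t->b + t->c;
}
```

Whether the by-pointer version actually wins then depends on what the callee does with the data; the hand-optimized variants footnoted above are where that trade-off shows up.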
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
There is one for data processing here: https://github.com/jinyus/related_post_gen
-
The Neat Programming Language
Is it ready for benchmarking? D currently sits at the top of https://github.com/jinyus/related_post_gen and it would be interesting to see how neat stacks up.
-
Murder is a pixel art ECS game engine in C#
[2] https://github.com/jinyus/related_post_gen#multicore-results
-
Jaq – A jq clone focused on correctness, speed, and simplicity
I think my benchmark[1] would be a great test for this. The jq[2] version takes 50s on my machine.
[1] : https://github.com/jinyus/related_post_gen
[2]: https://github.com/jinyus/related_post_gen/blob/main/jq/rela...
-
Gleam vs Erlang vs Go vs Zig vs Rust for data processing
I added gleam to my data processing benchmark and the performance is less than stellar...so I hope someone here can make suggestions to improve it.
- jinyus/related_post_gen: Data Processing benchmark featuring Rust, Go, Swift, Zig, Julia etc.
-
Ask HN: What's the big deal with Go (Golang)?
Easy concurrency.
ps: I wrote a data processing benchmark[1] and Go is currently leading the charts. I ported it to C++, but it's not performing as expected. Take a look if you have the time.
[1]: https://github.com/jinyus/related_post_gen
- Julia leads Rust, Zig, Go and Java in data processing benchmark
- Julia Ranks First in Data Processing Microbenchmark
What are some alternatives?
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
uiua - A stack-based array programming language
vectorflow
pspy - Monitor linux processes without root permissions
sundials - Official development repository for SUNDIALS - a SUite of Nonlinear and DIfferential/ALgebraic equation Solvers. Pull requests are welcome for bug fixes and minor changes.
ivy - ivy, an APL-like calculator
DirectXMath - DirectXMath is an all inline SIMD C++ linear algebra library for use in games and graphics apps
BQN - An APL-like programming language. Self-hosted!
xtensor - C++ tensors with broadcasting and lazy computing
cognate - A human readable quasi-concatenative programming language
how-to-optimize-gemm
Saxon-HE - Saxon-HE open source repository