Eigen vs OpenBLAS

| | Eigen | OpenBLAS |
|---|---|---|
| Mentions | - | 22 |
| Stars | - | 6,319 |
| Growth | - | 1.2% |
| Activity | - | 9.8 |
| Latest commit | over 8 years ago | 15 days ago |
| Language | C | - |
| License | - | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Eigen
We haven't tracked posts mentioning Eigen yet.
Tracking mentions began in Dec 2020.
OpenBLAS
- LLaMA Now Goes Faster on CPUs
The Fortran implementation is just a reference implementation. The goal of reference BLAS [0] is to provide relatively simple and easy to understand implementations which demonstrate the interface and are intended to give correct results to test against. Perhaps an exceptional Fortran compiler which doesn't yet exist could generate code which rivals hand (or automatically) tuned optimized BLAS libraries like OpenBLAS [1], MKL [2], ATLAS [3], and those based on BLIS [4], but in practice this is not observed.
Justine observed that the threading model for LLaMA makes it impractical to integrate one of these optimized BLAS libraries, so she wrote her own hand-tuned implementations following the same principles they use.
[0] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...
[1] https://github.com/OpenMathLib/OpenBLAS
[2] https://www.intel.com/content/www/us/en/developer/tools/onea...
[3] https://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Alg...
[4] https://en.wikipedia.org/wiki/BLIS_(software)
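To make the comment above concrete, here is a minimal sketch (not code from any of the libraries linked) contrasting a reference-style triple loop with a cache-blocked variant. Blocking is the basic principle the optimized libraries build on, with packing, register tiling, and per-microarchitecture SIMD kernels layered on top; the block size `BS` below is an arbitrary illustrative choice, not a tuned value.

```c
#include <stddef.h>

/* Reference-style SGEMM (C += A*B), row-major: clear and correct, but
 * makes no attempt to keep data resident in cache or in SIMD registers. */
void sgemm_reference(size_t n, const float *A, const float *B, float *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                C[i * n + j] += A[i * n + k] * B[k * n + j];
}

/* Cache-blocked variant: the same arithmetic, reordered so each tile of
 * A, B, and C is reused while it is still in cache. Optimized BLAS
 * libraries take this much further (operand packing, register tiles,
 * hand-tuned kernels per chip). */
#define BS 64  /* illustrative block size, not tuned for any real CPU */
void sgemm_blocked(size_t n, const float *A, const float *B, float *C) {
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t kk = 0; kk < n; kk += BS)
            for (size_t jj = 0; jj < n; jj += BS)
                for (size_t i = ii; i < ii + BS && i < n; i++)
                    for (size_t k = kk; k < kk + BS && k < n; k++) {
                        float a = A[i * n + k];
                        for (size_t j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

Both functions compute the same result; the gap between them (and the much larger gap to a tuned library) comes entirely from memory traffic and instruction selection, which is why a plain compile of the reference code cannot match OpenBLAS, MKL, or BLIS-based libraries.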
- Assume I'm an idiot - oogabooga LLaMa.cpp??!
- Learn x86-64 assembly by writing a GUI from scratch
Yeah. I'm going to be helping to expand CI for OpenBLAS and have been diving into this stuff lately. See the discussion in the closed OpenBLAS issue gh-1968 [0], for instance. OpenBLAS's Skylake kernels do rely on intrinsics [1] for compilers that support them, but there is a wide range of architectures to support, and when hand-tuned assembly kernels work better, that's what gets used. For example, [2].
[0] https://github.com/xianyi/OpenBLAS/issues/1968
[1] https://github.com/xianyi/OpenBLAS/blob/develop/kernel/x86_6...
[2] https://github.com/xianyi/OpenBLAS/blob/23693f09a26ffd8b60eb...
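For readers who haven't opened kernels like [1], here is a heavily simplified sketch of the intrinsics style: an 8-wide AVX2/FMA inner loop that accumulates one strip of C. This is not OpenBLAS code; real kernels keep many accumulators live in registers at once, work on packed operand buffers, prefetch, and handle edge cases, and the function name here is illustrative only.

```c
#include <immintrin.h>
#include <stddef.h>

/* Toy AVX2/FMA inner loop: accumulate an 8-column strip of one row of C,
 * C[0..7] += sum_k a[k] * b[k*ldb + 0..7]. */
void microkernel_1x8(size_t K, const float *a,
                     const float *b, size_t ldb, float *c) {
    __m256 acc = _mm256_loadu_ps(c);               /* load current C strip   */
    for (size_t k = 0; k < K; k++) {
        __m256 bk = _mm256_loadu_ps(b + k * ldb);  /* one 8-wide row of B    */
        __m256 ak = _mm256_set1_ps(a[k]);          /* broadcast A element    */
        acc = _mm256_fmadd_ps(ak, bk, acc);        /* acc += ak * bk         */
    }
    _mm256_storeu_ps(c, acc);                      /* write back C strip     */
}
```

Something like `gcc -O2 -mavx2 -mfma` is needed to build this; when a target compiler can't be relied on for these intrinsics, that is where the hand-written assembly paths such as [2] come in.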
- AI’s compute fragmentation: what matrix multiplication teaches us
We'll have to wait until part 2 to see what they are actually proposing, but they are trying to solve a real problem. To get a sense of things, check out the handwritten assembly kernels in OpenBLAS [0]. Note the level of granularity: there are micro-optimized implementations for specific chipsets.
If progress in ML will be aided by a proliferation of hyper-specialized hardware, then there really is a scalability issue around developing optimized matmul routines for each specialized chip. To be able to develop a custom ASIC for a particular application and then easily generate the necessary matrix libraries without having to write hand-crafted assembly for each specific case seems like it could be very powerful.
[0] https://github.com/xianyi/OpenBLAS/tree/develop/kernel
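That granularity also shows up in how a library picks a kernel at run time. Below is a minimal sketch of the dispatch idea using GCC/Clang's `__builtin_cpu_supports`, not OpenBLAS's actual selection machinery; the kernel names are hypothetical placeholders.

```c
#include <stddef.h>

/* Hypothetical kernel variants; in a real library each would be a
 * separately tuned (often hand-written) implementation per chipset. */
void sgemm_generic(size_t n, const float *A, const float *B, float *C);
void sgemm_avx2(size_t n, const float *A, const float *B, float *C);
void sgemm_avx512(size_t n, const float *A, const float *B, float *C);

typedef void (*sgemm_fn)(size_t, const float *, const float *, float *);

/* Pick the best kernel the running CPU supports (GCC/Clang builtin). */
sgemm_fn select_sgemm(void) {
    if (__builtin_cpu_supports("avx512f")) return sgemm_avx512;
    if (__builtin_cpu_supports("avx2"))    return sgemm_avx2;
    return sgemm_generic;
}
```

Every new microarchitecture or accelerator adds another row to that table and another kernel to write and maintain, which is exactly the scalability problem the article is pointing at.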
- Trying downloading BCML
```
libraries mkl_rt not found in ['C:\python\lib', 'C:\', 'C:\python\libs']
```
Install this and try again. Might need to reboot, never know with Windows: https://www.openblas.net/
- The Bitter Truth: Python 3.11 vs Cython vs C++ Performance for Simulations
There isn't any Fortran code in that repo itself, but NumPy can be linked against several numeric libraries. If you look through the NumPy wheels available on PyPI, all the latest ones are packaged with OpenBLAS, which uses Fortran quite a bit: https://github.com/xianyi/OpenBLAS
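For a sense of the layer NumPy ultimately delegates to, here is a minimal C program against the CBLAS interface that OpenBLAS provides (link with `-lopenblas`; `cblas.h` ships with OpenBLAS). This is only a small usage sketch, not anything from the NumPy build itself.

```c
#include <stdio.h>
#include <cblas.h>   /* provided by OpenBLAS */

int main(void) {
    /* Row-major 2x2 matrices: C = 1.0 * A * B + 0.0 * C */
    double A[] = {1, 2,
                  3, 4};
    double B[] = {5, 6,
                  7, 8};
    double C[4] = {0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K        */
                1.0, A, 2,      /* alpha, A, lda  */
                B, 2,           /* B, ldb         */
                0.0, C, 2);     /* beta, C, ldc   */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
    return 0;
}
```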
- Optimizing compilers reload vector constants needlessly
- Just a quick question, can a programming language be as fast as C++ and efficient with as simple syntax like Python?
Sure - write functions in another language, export C bindings, and then call those functions from Python. An example is NumPy - a lot of its linear algebra functions are implemented in C and Fortran.
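A tiny sketch of that pattern, assuming a hypothetical `fastmath.c` compiled to a shared library and loaded from Python with `ctypes` (the file and function names are illustrative, not from NumPy):

```c
/* fastmath.c -- compile with: gcc -O2 -shared -fPIC -o libfastmath.so fastmath.c
 *
 * Hot loop written in C, callable from Python via ctypes, e.g.:
 *   lib = ctypes.CDLL("./libfastmath.so")
 *   lib.dot.restype = ctypes.c_double
 *   lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
 *                       ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
 */
#include <stddef.h>

double dot(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];   /* the loop Python would be slow at */
    return s;
}
```

NumPy does essentially this at much larger scale, exposing C and Fortran routines (often BLAS-backed) behind a Python-friendly API.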
- OpenBLAS - optimized BLAS library based on GotoBLAS2 1.13 BSD version
- How to include external libraries?
Read the official docs yet?
What are some alternatives?
GLM - OpenGL Mathematics (GLM)
blaze
cblas - Netlib's C BLAS wrapper: http://www.netlib.org/blas/#_cblas
ceres-solver - A large scale non-linear optimization library
Boost.Multiprecision - Boost.Multiprecision
CGAL - The public CGAL repository, see the README below
ExprTK - C++ Mathematical Expression Parsing And Evaluation Library https://www.partow.net/programming/exprtk/index.html