OpenBLAS vs tensorflow

| | OpenBLAS | tensorflow |
|---|---|---|
| Mentions | 22 | 223 |
| Stars | 5,983 | 182,575 |
| Stars growth | 1.6% | 0.5% |
| Activity | 9.8 | 10.0 |
| Last commit | about 18 hours ago | 4 days ago |
| Language | C | C++ |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenBLAS
-
LLaMA Now Goes Faster on CPUs
The Fortran implementation is just a reference implementation. The goal of reference BLAS [0] is to provide relatively simple, easy-to-understand implementations that demonstrate the interface and are intended to give correct results to test against. Perhaps an exceptional Fortran compiler which doesn't yet exist could generate code rivaling hand-tuned (or automatically tuned) optimized BLAS libraries like OpenBLAS [1], MKL [2], ATLAS [3], and those based on BLIS [4], but in practice this is not observed.
Justine observed that the threading model for LLaMA makes it impractical to integrate one of these optimized BLAS libraries, so she wrote her own hand-tuned implementations following the same principles they use. (A quick sketch of the reference-vs-tuned gap follows the links below.)
[0] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...
[1] https://github.com/OpenMathLib/OpenBLAS
[2] https://www.intel.com/content/www/us/en/developer/tools/onea...
[3] https://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Alg...
[4] https://en.wikipedia.org/wiki/BLIS_(software)
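To make that gap concrete, here is a minimal sketch, assuming a NumPy wheel bundled with an optimized BLAS (PyPI wheels typically ship OpenBLAS): the triple loop is the shape of the algorithm reference BLAS documents, while `@` dispatches to the tuned library.

```python
import time
import numpy as np

def naive_matmul(a, b):
    # Textbook triple loop -- the algorithm reference BLAS documents.
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            out[i, j] = s
    return out

rng = np.random.default_rng(0)
a, b = rng.random((128, 128)), rng.random((128, 128))

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t1 = time.perf_counter()
c_blas = a @ b  # dispatches to whatever BLAS NumPy was linked against
t2 = time.perf_counter()

assert np.allclose(c_naive, c_blas)
print(f"naive: {t1 - t0:.2f}s  BLAS: {t2 - t1:.5f}s")
```

Even at this small size the tuned path usually wins by several orders of magnitude, and that is before accounting for Python interpreter overhead in the naive loop.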
- Assume I'm an idiot - oogabooga LLaMa.cpp??!
-
Learn x86-64 assembly by writing a GUI from scratch
Yeah. I'm going to be helping to work on expanding CI for OpenBLAS and have been diving into this stuff lately. See the discussion in the closed OpenBLAS issue gh-1968 [0], for instance. OpenBLAS's Skylake kernels do rely on intrinsics [1] for compilers that support them, but there's a wide range of architectures to support, and when hand-tuned assembly kernels work better, those are what get used; see [2] for example. (A quick way to check which kernel set the runtime dispatcher picked is sketched after the links.)
[0] https://github.com/xianyi/OpenBLAS/issues/1968
[1] https://github.com/xianyi/OpenBLAS/blob/develop/kernel/x86_6...
[2] https://github.com/xianyi/OpenBLAS/blob/23693f09a26ffd8b60eb...
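You can ask a running OpenBLAS which per-architecture kernel set its dispatcher selected. A small sketch using ctypes; the shared-library name varies by platform and distribution, so the fallback name here is an assumption:

```python
import ctypes
import ctypes.util

# Library name varies by platform; "libopenblas.so.0" is a Linux guess.
libname = ctypes.util.find_library("openblas") or "libopenblas.so.0"
openblas = ctypes.CDLL(libname)

openblas.openblas_get_config.restype = ctypes.c_char_p
openblas.openblas_get_corename.restype = ctypes.c_char_p

print(openblas.openblas_get_config().decode())    # build flags and version
print(openblas.openblas_get_corename().decode())  # e.g. "Haswell" or "SkylakeX"
```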
-
AI’s compute fragmentation: what matrix multiplication teaches us
We'll have to wait until part 2 to see what they are actually proposing, but they are trying to solve a real problem. To get a sense of things, check out the handwritten assembly kernels in OpenBLAS [0]. Note the level of granularity: there are micro-optimized implementations for specific chipsets.
If progress in ML will be aided by a proliferation of hyper-specialized hardware, then there really is a scalability issue around developing optimized matmul routines for each specialized chip. Being able to develop a custom ASIC for a particular application and then easily generate the necessary matrix libraries, without having to write hand-crafted assembly for each specific case, could be very powerful. (The blocking idea at the heart of those kernels is sketched after the link below.)
[0] https://github.com/xianyi/OpenBLAS/tree/develop/kernel
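The core trick those kernels implement is blocking (tiling) for the memory hierarchy. A toy NumPy sketch of the idea; real kernels do this at the register and SIMD level, and the block size is exactly the kind of parameter that has to be re-tuned for each chip's cache sizes:

```python
import numpy as np

def blocked_matmul(a, b, bs=64):
    # Compute C = A @ B one bs-by-bs tile at a time, so each tile stays
    # cache-resident while it is reused -- the same idea hand-tuned
    # kernels apply down at the register/SIMD level.
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                out[i:i+bs, j:j+bs] += a[i:i+bs, p:p+bs] @ b[p:p+bs, j:j+bs]
    return out

rng = np.random.default_rng(0)
a, b = rng.random((512, 512)), rng.random((512, 512))
assert np.allclose(blocked_matmul(a, b), a @ b)
```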
-
Trying downloading BCML
```
libraries mkl_rt not found in ['C:\python\lib', 'C:\', 'C:\python\libs']
```
Install this and try again. Might need to reboot, never know with Windows: https://www.openblas.net/
-
The Bitter Truth: Python 3.11 vs Cython vs C++ Performance for Simulations
There isn't any Fortran code in that repo itself, but NumPy can be linked against several numeric libraries. If you look through the NumPy wheels available on PyPI, all the latest ones are packaged with OpenBLAS, which uses Fortran quite a bit: https://github.com/xianyi/OpenBLAS
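You can check which BLAS your own NumPy build is linked against with a one-liner (the output format varies across NumPy versions):

```python
import numpy as np

# Prints the BLAS/LAPACK libraries this build was compiled against;
# PyPI wheels typically report OpenBLAS here.
np.show_config()
```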
- Optimizing compilers reload vector constants needlessly
-
Just a quick question, can a programming language be as fast as C++ and efficient with as simple syntax like Python?
Sure - write functions in another language, export C bindings, and then call those functions from Python. An example is NumPy - a lot of its linear algebra functions are implemented in C and Fortran.
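Here's a minimal sketch of that pattern with ctypes; the `fast.c` file, `libfast.so`, and the `csum` function are hypothetical stand-ins, not NumPy's actual internals:

```python
# fast.c (hypothetical), built with: cc -O2 -shared -fPIC fast.c -o libfast.so
#
#   double csum(const double *xs, long n) {
#       double s = 0.0;
#       for (long i = 0; i < n; i++) s += xs[i];
#       return s;
#   }

import ctypes
import numpy as np

lib = ctypes.CDLL("./libfast.so")  # the hypothetical library built above
lib.csum.restype = ctypes.c_double
lib.csum.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_long]

xs = np.arange(1_000_000, dtype=np.float64)
ptr = xs.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(lib.csum(ptr, xs.size))  # the hot loop runs at C speed
```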
- OpenBLAS - optimized BLAS library based on GotoBLAS2 1.13 BSD version
-
How to include external libraries?
Read the official docs yet?
tensorflow
-
Side Quest Devblog #1: These Fakes are getting Deep
```python
# L2-normalize the encoding tensors
image_encoding = tf.math.l2_normalize(image_encoding, axis=1)
audio_encoding = tf.math.l2_normalize(audio_encoding, axis=1)

# Find euclidean distance between image_encoding and audio_encoding
# Essentially trying to detect if the face is saying the audio
# Will return nan without the 1e-12 offset due to
# https://github.com/tensorflow/tensorflow/issues/12071
d = tf.norm((image_encoding - audio_encoding) + 1e-12,
            ord='euclidean', axis=1, keepdims=True)

discriminator = keras.Model(inputs=[image_input, audio_input],
                            outputs=[d], name="discriminator")
```
-
Google lays off its Python team
[3]: https://github.com/tensorflow/tensorflow/graphs/contributors
- TensorFlow-metal on Apple Mac is junk for training
-
🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
To get up to speed with TensorFlow, check their quickstart.
- One .gitignore to rule them all
-
10 Github repositories to achieve Python mastery
-
GitHub and Developer Ecosystem Control
A major part of GitHub's pull on its user base comes from hosting a considerable number of popular projects, including Angular, React, Kubernetes, CPython, Ruby, TensorFlow, and even the software that powers this site, Forem.
-
Non-determinism in GPT-4 is caused by Sparse MoE
Right but that's not an inherent GPU determinism issue. It's a software issue.
https://github.com/tensorflow/tensorflow/issues/3103#issueco... is correct that it's not necessary, it's a choice.
Your line of reasoning appears to be "GPUs are inherently non-deterministic don't be quick to judge someone's code" which as far as I can tell is dead wrong.
Admittedly there are some cases and instructions that may result in non-determinism, but those are inherently necessary. The author should think carefully before introducing non-determinism. There are many scenarios where it is irrelevant, but ultimately the issue we are discussing here isn't the GPU's fault.
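For what it's worth, TensorFlow treats this as an explicit, opt-in choice. A minimal sketch (TF 2.9+, per the TensorFlow docs):

```python
import tensorflow as tf

# Seed Python, NumPy, and TensorFlow RNGs in one call.
tf.keras.utils.set_random_seed(42)

# Ask for deterministic kernels; ops with no deterministic
# implementation raise UnimplementedError instead of silently
# producing run-to-run differences.
tf.config.experimental.enable_op_determinism()
```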
-
Can someone explain how keras code gets into the Tensorflow package?
and things like `y = layers.ELU()(y)` work as expected. I wanted to see a list of the available layers, so I went to the TensorFlow GitHub repository and to the keras directory. There's a warning in that directory that says: […]
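One quick way to see where those layer classes actually live (the exact module path varies across TF/Keras versions) is to ask Python directly:

```python
import tensorflow as tf

# The class is re-exported under tf.keras, but __module__ reveals the
# package it is actually defined in (a keras.* path in TF 2.x).
print(tf.keras.layers.ELU.__module__)
```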
-
Is it even possible to design a ML model without using Python or MATLAB? Like using C++, C or Java?
Exactly what language do you think TensorFlow is written in? :)
What are some alternatives?
Eigen
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle (『飞桨』) core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
GLM - OpenGL Mathematics (GLM)
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
cblas - Netlib's C BLAS wrapper: http://www.netlib.org/blas/#_cblas
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
blaze
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
Boost.Multiprecision
scikit-learn - scikit-learn: machine learning in Python
ceres-solver - A large scale non-linear optimization library
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.