| | compiler-explorer | OpenBLAS |
|---|---|---|
| Mentions | 191 | 22 |
| Stars | 15,198 | 5,974 |
| Growth | 1.5% | 1.5% |
| Activity | 9.9 | 9.8 |
| Latest commit | about 14 hours ago | 4 days ago |
| Language | TypeScript | C |
| License | BSD 2-clause "Simplified" License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
compiler-explorer
- What if null was an Object in Java?
At least on Android arm64, it looks like a `dmb ishst` is emitted after the constructor, which lets later loads skip an explicit barrier. Removing `final` from the field causes that barrier not to be emitted.
https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename...
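For a rough C++ analogy of that publication barrier (my sketch, not from the comment; on AArch64 a release fence lowers to a `dmb ish`-family barrier, much like the `dmb ishst` above):

```cpp
#include <atomic>

struct Config {
    int value;
    explicit Config(int v) : value(v) {}
};

std::atomic<Config*> g_config{nullptr};

void publish(int v) {
    Config* c = new Config(v);
    // Analogous to the store-store barrier the JVM emits after a
    // constructor that writes final fields: the field stores above must
    // become visible before the pointer store below, so no reader can
    // observe a half-constructed object.
    std::atomic_thread_fence(std::memory_order_release);
    g_config.store(c, std::memory_order_relaxed);
}
```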
- Ask HN: Which books/resources to understand modern Assembler?
- 3rd Edition of Programming: Principles and Practice Using C++ by Stroustrup
You said you won't get "extreme performance" from C++ because it is "buried under the weight of decades of compatibility hacks."
Now your whole comment is about vector behavior. You haven't said which "decades of compatibility hacks" are holding back performance. Whatever behavior you want from a vector is not a language limitation.
You could write your own vector and be done with it, although I'm still not sure what you mean, since once you reserve capacity a vector only doubles it again when you overrun what you reserved. The reason this is never a performance obstacle is that if you're going to use more memory anyway, you reserve more up front. That's what any normal programmer does, and then they move on.
Show what you mean here:
https://godbolt.org/
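To make the reserve-up-front point concrete, a minimal sketch (my example, not the commenter's):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1'000'000);                 // one allocation up front
    for (int i = 0; i < 1'000'000; ++i)
        v.push_back(i);                   // size never exceeds capacity: no reallocation
    std::printf("capacity: %zu\n", v.capacity());
}
```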
I've never used ISPC. It's somewhat interesting, although since it's Intel-focused it's not actually portable.
I guess the goalposts are shifting now. First it was "C++ as a language has performance limitations"; now it's "Rust has a vector with a function I want, and I also want SIMD stuff that doesn't exist. It does exist? Not like that!"
Try to stay on track. You said "decades of compatibility hacks" were holding back C++ performance, then went down a rabbit hole that does nothing to support that.
- C++ Insights – See your source code with the eyes of a compiler
C++ Insights is available online at https://cppinsights.io/
It is also available at the touch of a button within the most excellent https://godbolt.org/ alongside the button that takes your code sample to https://quick-bench.com/
Those sites and https://cppreference.com/ are what I use constantly while coding.
I recently discovered https://whitebox.systems/ as well. It's a local app with a $69 one-time charge, and it only really works with "C with Classes"-style functions, but it looks promising as another productivity boost.
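As a taste of what C++ Insights shows (my example; the tool's actual output uses generated names like `__range1`), a range-for is displayed desugared into its iterator form:

```cpp
#include <vector>

int sum(const std::vector<int>& v) {
    int total = 0;
    // C++ Insights rewrites this loop roughly as:
    //   for (auto it = v.begin(), end = v.end(); it != end; ++it) {
    //       int x = *it;
    //       total += x;
    //   }
    for (int x : v)
        total += x;
    return total;
}
```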
- Ask HN: How can I learn about performance optimization?
[P&H RISC] https://www.google.com/books/edition/_/e8DvDwAAQBAJ
Compiler Explorer by Matt Godbolt [Godbolt] can help better understand what code a compiler generates under different circumstances.
[Godbolt] https://godbolt.org
The official CPU architecture manuals from CPU vendors are surprisingly readable and information-rich. I only read the fragments that I need or that interest me and move on. Here is Intel's [Intel]. I use the Combined Volume Set, a huge PDF comprising all ten volumes; it is easier to search when it's all in one file, and I can open several copies on different pages to make navigation easier.
Intel also has a whole optimization reference manual [Intel] (scroll down, it’s all on the same page). The manual helps understand what exactly the CPU is doing.
[Intel] https://www.intel.com/content/www/us/en/developer/articles/t...
Personally, I believe in automated benchmarks that measure end-to-end what is actually important and notify you when a change impacts performance for the worse.
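One common way to automate that (a sketch using Google Benchmark, the library behind quick-bench mentioned earlier; not the commenter's actual setup) is to keep benchmarks like this in CI and compare runs between commits:

```cpp
#include <benchmark/benchmark.h>
#include <vector>

// Measures the end-to-end cost of the operation you actually care about;
// a CI job can diff these numbers across commits and flag regressions.
static void BM_SumVector(benchmark::State& state) {
    std::vector<int> v(state.range(0), 1);
    for (auto _ : state) {
        long sum = 0;
        for (int x : v) sum += x;
        benchmark::DoNotOptimize(sum);  // keep the result from being optimized away
    }
}
BENCHMARK(BM_SumVector)->Arg(1 << 20);
BENCHMARK_MAIN();
```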
- Managing mutable data in Elixir with Rust
Let's compile it with https://godbolt.org/, turn on some optimisations and inspect the IR (-O2 -emit-llvm). Copying out the part that corresponds to the while loop:
4:
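(A label like `4:` names a basic block in LLVM IR. For readers reproducing this, here is a comparable loop in C++, my sketch rather than the post's code; compiling with `clang++ -O2 -emit-llvm -S` shows the loop as numbered blocks with phi nodes and a back edge.)

```cpp
// clang++ -O2 -emit-llvm -S loop.cpp  ->  loop.ll
// In the .ll output the while loop appears as numbered basic blocks
// (at -O2 it may also be unrolled or vectorized into several blocks).
int sum(const int* data, int n) {
    int total = 0;
    int i = 0;
    while (i < n) {
        total += data[i];
        ++i;
    }
    return total;
}
```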
- Free MIT Course: Performance Engineering of Software Systems
These resources were extra useful for building deeper intuitions about GPU performance for ML models, both at work and in graduate school.
- CMU's "Deep Learning Systems" Course is hosted online and has YouTube lectures online. While not generally relevant to software performance, it is especially useful for engineers interested in building strong fundamentals that will serve them well when taking ML models into production environments: https://dlsyscourse.org/
- Compiler Explorer is a tool that lets you easily input some code and check how the assembly output maps to the source. I think this is exceptionally useful for beginner/intermediate programmers who are familiar with one compiled high-level language but have not been exposed to reading lots of assembly. It is also great for testing how different compiler flags affect assembly output. Many people used to coding in C and C++ probably know about this, but I still run into people who haven't, so I share it whenever performance comes up: https://godbolt.org/
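A quick way to start (my example): paste a small function like the one below into https://godbolt.org/ and flip between optimization flags to watch the output change.

```cpp
// Compare -O0, -O2, and -O2 -march=native in Compiler Explorer:
// at higher optimization levels the loop gets unrolled and, with wide
// vector units enabled, auto-vectorized into SIMD instructions.
int dot(const int* a, const int* b, int n) {
    int acc = 0;
    for (int i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}
```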
- Verifying Rust Zeroize with Assembly...including portable SIMD
To really understand what's going on here we can look at the compiled assembly code. I'm working on a Mac and can do this using the objdump tool. Compiler Explorer is also a handy tool, but it doesn't seem to support Arm assembly, which is what Rust targets when compiling on Apple Silicon.
- 4B If Statements
- Operator precedence doubt
Play around with it in godbolt if you're really curious: https://godbolt.org/
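A classic example of the kind of surprise worth checking there (my sketch): bitwise `&` binds more loosely than `==` in C and C++.

```cpp
#include <cstdio>

int main() {
    int flags = 0x4;
    // Parses as flags & (0x4 == 0x4), i.e. flags & 1, which is 0 here.
    if (flags & 0x4 == 0x4) std::printf("surprising\n");   // not printed
    // The intended test needs explicit parentheses.
    if ((flags & 0x4) == 0x4) std::printf("intended\n");   // printed
}
```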
OpenBLAS
- LLaMA Now Goes Faster on CPUs
The Fortran implementation is just a reference implementation. The goal of reference BLAS [0] is to provide relatively simple and easy to understand implementations which demonstrate the interface and are intended to give correct results to test against. Perhaps an exceptional Fortran compiler which doesn't yet exist could generate code which rivals hand (or automatically) tuned optimized BLAS libraries like OpenBLAS [1], MKL [2], ATLAS [3], and those based on BLIS [4], but in practice this is not observed.
Justine observed that the threading model for LLaMA makes it impractical to integrate one of these optimized BLAS libraries, so she wrote her own hand-tuned implementations following the same principles they use.
[0] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...
[1] https://github.com/OpenMathLib/OpenBLAS
[2] https://www.intel.com/content/www/us/en/developer/tools/onea...
[3] https://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Alg...
[4] https://en.wikipedia.org/wiki/BLIS_(software)
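To illustrate what "relatively simple and easy to understand" means here (my sketch in C++; reference BLAS spells out the Fortran equivalent), a reference-style matrix multiply is essentially the textbook triple loop:

```cpp
// Textbook GEMM: C = A * B for row-major n x n matrices. Easy to verify
// and good as a correctness oracle, but far from the throughput of
// OpenBLAS/MKL kernels that block for cache and use SIMD.
void reference_gemm(const double* A, const double* B, double* C, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double acc = 0.0;
            for (int k = 0; k < n; ++k)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
}
```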
- Assume I'm an idiot - oogabooga LLaMa.cpp??!
- Learn x86-64 assembly by writing a GUI from scratch
Yeah. I'm going to be helping to expand CI for OpenBLAS and have been diving into this stuff lately. See the discussion in this closed OpenBLAS issue, gh-1968 [0], for instance. OpenBLAS's Skylake kernels do rely on intrinsics [1] for compilers that support them, but there's a wide range of architectures to support, and when hand-tuned assembly kernels work better, those are what get used. For example, [2].
[0] https://github.com/xianyi/OpenBLAS/issues/1968
[1] https://github.com/xianyi/OpenBLAS/blob/develop/kernel/x86_6...
[2] https://github.com/xianyi/OpenBLAS/blob/23693f09a26ffd8b60eb...
- AI’s compute fragmentation: what matrix multiplication teaches us
We'll have to wait until part 2 to see what they are actually proposing, but they are trying to solve a real problem. To get a sense of things, check out the handwritten assembly kernels in OpenBLAS [0]. Note the level of granularity: there are micro-optimized implementations for specific chipsets.
If progress in ML will be aided by a proliferation of hyper-specialized hardware, then there really is a scalability issue around developing optimized matmul routines for each specialized chip. To be able to develop a custom ASIC for a particular application and then easily generate the necessary matrix libraries without having to write hand-crafted assembly for each specific case seems like it could be very powerful.
[0] https://github.com/xianyi/OpenBLAS/tree/develop/kernel
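For a flavor of what those kernels optimize (a simplified sketch; the real kernels go much further with register blocking, packing, prefetching, and per-chipset assembly), the first step is tiling the loops so the working set stays in cache:

```cpp
#include <algorithm>

// Cache-blocked GEMM sketch: C += A * B on row-major n x n matrices,
// processed in kTile x kTile tiles so the data being reused stays in cache.
// C must be zero-initialized by the caller for a plain C = A * B.
constexpr int kTile = 64;

void blocked_gemm(const double* A, const double* B, double* C, int n) {
    for (int ii = 0; ii < n; ii += kTile)
        for (int kk = 0; kk < n; kk += kTile)
            for (int jj = 0; jj < n; jj += kTile)
                for (int i = ii; i < std::min(ii + kTile, n); ++i)
                    for (int k = kk; k < std::min(kk + kTile, n); ++k) {
                        const double a = A[i * n + k];
                        for (int j = jj; j < std::min(jj + kTile, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```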
- Trying downloading BCML
libraries mkl_rt not found in ['C:\python\lib', 'C:\', 'C:\python\libs']
Install this and try again. Might need to reboot, never know with Windows: https://www.openblas.net/
- The Bitter Truth: Python 3.11 vs Cython vs C++ Performance for Simulations
There isn't any Fortran code in that repo itself, but numpy can be linked against several numeric libraries. If you look through the numpy wheels available on PyPI, all the latest ones are packaged with OpenBLAS, which uses Fortran quite a bit: https://github.com/xianyi/OpenBLAS
- Optimizing compilers reload vector constants needlessly
- Just a quick question, can a programming language be as fast as C++ and efficient with as simple syntax like Python?
Sure - write functions in another language, export C bindings, and then call those functions from Python. An example is NumPy - a lot of its linear algebra functions are implemented in C and Fortran.
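A minimal sketch of that pattern (hypothetical names, not NumPy's actual code): a C++ function exported with C linkage, callable from Python via ctypes.

```cpp
// mathlib.cpp - build: g++ -O2 -shared -fPIC mathlib.cpp -o libmathlib.so
// extern "C" suppresses C++ name mangling so the symbol is visible to
// any FFI, including Python's ctypes.
extern "C" double dot(const double* a, const double* b, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}

// From Python (hypothetical usage):
//   import ctypes
//   lib = ctypes.CDLL("./libmathlib.so")
//   lib.dot.restype = ctypes.c_double
//   lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
//                       ctypes.POINTER(ctypes.c_double), ctypes.c_int]
```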
- OpenBLAS - optimized BLAS library based on GotoBLAS2 1.13 BSD version
- How to include external libraries?
Read the official docs yet?
What are some alternatives?
- C++ Format - A modern formatting library
- Eigen
- rust - Empowering everyone to build reliable and efficient software.
- GLM - OpenGL Mathematics (GLM)
- format-benchmark - A collection of formatting benchmarks
- cblas - Netlib's C BLAS wrapper: http://www.netlib.org/blas/#_cblas
- papers - ISO/IEC JTC1 SC22 WG21 paper scheduling and management
- blaze
- rustc_codegen_gcc - libgccjit AOT codegen for rustc
- Boost.Multiprecision
- firejail - Linux namespaces and seccomp-bpf sandbox
- ceres-solver - A large scale non-linear optimization library