array vs ride
| | array | ride |
|---|---|---|
| Mentions | 4 | 5 |
| Stars | 187 | 192 |
| Growth | - | 2.6% |
| Activity | 6.9 | 9.2 |
| Last commit | 3 months ago | 25 days ago |
| Language | C++ | JavaScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
array
- Benchmarking 20 programming languages on N-queens and matrix multiplication
I should have mentioned somewhere that I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want the thread parallelism outside code like this anyway.
As for the inner loop not being well optimized... the disassembly looks like the same basic thing as OpenBLAS. There's disassembly in the comments of that file showing what code it generates; I'd love to know what you think is lacking! The only difference between the one I linked and this is prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...
This gets to 90% of BLAS: https://github.com/dsharlet/array/blob/38f8ce332fc4e26af0832...
But this is quite general. I'm claiming you can beat BLAS if you have some unique knowledge of the problem that you can exploit. For example, some kinds of sparsity can be implemented within the above example code yet still far outperform the more general sparsity supported by MKL and similar.
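The general shape of the hand-written matmul being discussed can be sketched with plain loops. This is not the linked library's code, just a minimal illustration of the loop ordering that lets the compiler vectorize the innermost loop over contiguous memory (the `i-p-j` order, rather than the textbook `i-j-p` order that strides through `B`):

```cpp
#include <vector>

// Row-major matmul: C (m x n) += A (m x k) * B (k x n).
// The innermost loop runs over j, so both C and B are accessed
// contiguously and the compiler can vectorize it.
void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, int m, int k, int n) {
  for (int i = 0; i < m; ++i) {
    for (int p = 0; p < k; ++p) {
      float a = A[i * k + p];  // invariant in the inner loop
      for (int j = 0; j < n; ++j) {
        C[i * n + j] += a * B[p * n + j];
      }
    }
  }
}
```

Getting from this to the "90% of BLAS" result quoted above additionally requires tiling for cache and registers, which is what the linked example files add.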
- A basic introduction to NumPy's einsum
Compilers can be pretty good if you help them out a bit. Here's my implementation of Einstein reductions (including summations) in C++, which generates code pretty close to ideal until you start getting into processor-architecture-specific optimizations: https://github.com/dsharlet/array#einstein-reductions
If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions
It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools needed for humans to express the code in a way that a good compiler can turn into really good code.
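For readers unfamiliar with the notation, an einsum expression is just a loop nest: repeated indices are summed, free indices survive in the output. A minimal sketch (plain loops, not the linked library's API) of the expression `ij,j->i`, i.e. a matrix-vector product:

```cpp
#include <vector>

// The einsum expression "ij,j->i" written as explicit loops:
// j appears in both operands, so it is summed; i is free, so it
// indexes the output.
std::vector<float> einsum_ij_j(const std::vector<float>& A,
                               const std::vector<float>& x,
                               int rows, int cols) {
  std::vector<float> y(rows, 0.0f);
  for (int i = 0; i < rows; ++i) {
    for (int j = 0; j < cols; ++j) {
      y[i] += A[i * cols + j] * x[j];  // sum over the repeated index j
    }
  }
  return y;
}
```

Libraries like the one linked above let you write the index expression directly and generate an equivalent loop nest at compile time, leaving the vectorization to the compiler.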
ride
-
Franz Inc. has moved the whole Allegro CL IDE to a browser-based user interface, including all their Lisp development tools. One can check that out with their Allegro CL Express Edition.
Not a bad idea for current times. Dyalog APL, the only active APL compiler developer, did something similar a couple of years ago with RIDE.
- Having trouble installing bqn into arch
The Linux IDE for Dyalog is Ride, packaged separately and available from Dyalog's github: https://github.com/Dyalog/ride
- 2021 Day 6 Solutions
- Try APL
- From Competitive Programming to APL and Array Programming
What are some alternatives?
APL - another APL derivative
array - Simple array language written in kotlin
APL.jl
ngn-apl - An APL interpreter written in JavaScript. Runs in a browser or NodeJS.
fish-shell - The user-friendly command line shell.
optimizing-the-memory-layout-of-std-tuple - Optimizing the memory layout of std::tuple
json - A tiny JSON parser and emitter for Perl 6 on Rakudo
aplette - This is a new take on an old language: APL. The goal is to pare APL down to its elegant essence. This version of APL is oriented toward scripting within a Unix-style computing environment.
adventofcode - Answers to Advent of Code
Advent-of-code - My solutions of adventofcode.com
NumPy - The fundamental package for scientific computing with Python.
AdventofCode2021