array vs BQN
| | array | BQN |
|---|---|---|
| Mentions | 4 | 49 |
| Stars | 187 | 827 |
| Growth | - | - |
| Activity | 6.9 | 8.9 |
| Latest commit | 3 months ago | 22 days ago |
| Language | C++ | KakouneScript |
| License | Apache License 2.0 | ISC License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
array
-
Benchmarking 20 programming languages on N-queens and matrix multiplication
I should have mentioned somewhere: I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want the thread parallelism outside code like this anyway.
As for the inner loop not being well optimized... the disassembly looks like the same basic thing as OpenBLAS's. There's disassembly in the comments of that file to show what code it generates; I'd love to know what you think is lacking! The only difference between the one I linked and this is prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...
This gets to 90% of BLAS: https://github.com/dsharlet/array/blob/38f8ce332fc4e26af0832...
But this is quite general. I’m claiming you can beat BLAS if you have some unique knowledge of the problem that you can exploit. For example, some kinds of sparsity can be implemented within the above example code yet still far outperform the more general sparsity supported by MKL and similar.
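The sparsity claim above can be illustrated with a minimal sketch. This is a hedged NumPy illustration of the idea, not code from the linked C++ library: if you know in advance that whole blocks of one operand are zero, you can skip them entirely, which a fully general dense BLAS routine cannot do.

```python
import numpy as np

def block_sparse_matmul(a, b, block=64):
    """Multiply a @ b, skipping blocks of `a` that are entirely zero.

    A sketch of exploiting problem-specific structure (block sparsity)
    that a general-purpose dense BLAS has no way to take advantage of.
    """
    m, k = a.shape
    c = np.zeros((m, b.shape[1]))
    for i in range(0, m, block):
        for p in range(0, k, block):
            a_blk = a[i:i + block, p:p + block]
            if not a_blk.any():        # structural zero block: no work at all
                continue
            c[i:i + block, :] += a_blk @ b[p:p + block, :]
    return c
```

For a matrix that is mostly zero blocks, the inner multiply runs only on the nonzero fraction, while the result still matches a dense `a @ b`.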
-
A basic introduction to NumPy's einsum
Compilers can be pretty good if you help them out a bit. Here's my implementation of Einstein reductions (including summations) in C++, which generates pretty close to ideal code until you start getting into processor-architecture-specific optimizations: https://github.com/dsharlet/array#einstein-reductions
If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions
It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools a human needs to express the code in a way that a good compiler can turn into really good code.
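For readers unfamiliar with the notation being discussed, here is a small NumPy `einsum` example (NumPy used for brevity; the library linked above is the C++ counterpart). A repeated index is summed over, which is all Einstein notation means:

```python
import numpy as np

# Matrix multiplication as an Einstein reduction:
# the repeated index j is summed over.
a = np.arange(12.0).reshape(3, 4)
b = np.arange(20.0).reshape(4, 5)
c = np.einsum('ij,jk->ik', a, b)   # same result as a @ b

# The same notation covers traces, transposes, batched products, etc.
trace = np.einsum('ii->', np.eye(3))   # sum of the diagonal
```

The appeal is that one compact spec string replaces hand-written loop nests, while leaving the loop structure explicit enough for a compiler (or NumPy's dispatcher) to generate good code.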
BQN
-
Bare minimum atw-style K interpreter for learning purposes
I recommend checking out BQN at https://mlochbaum.github.io/BQN/ and the YouTube channel code_report by Conor Hoekstra (see also "Composition Intuition by Conor Hoekstra | Lambda Days 2023"). It is well documented.
-
YAML Parser for Dyalog APL
I don't put a lot of stock in the "write-only" accusation. I think it's mostly used by those who don't know APL because, first, it's clever, and second, they can't read the code. However, if I remember that I implemented something in J 10 years ago, I will definitely dig out the code, because that's by far the fastest way for me to remember how it works.
This project specifically looks to be done in a flat array style similar to Co-dfns [0]. It's not a very common way to use APL. However, I've maintained an array-based compiler [1] for several years and don't find reading it particularly difficult. Debugging is significantly easier than in a scalar compiler, because the computation works on arrays drawn from the entire source code, and it's easy to inspect these and figure out what doesn't match expectations. I wrote most of [2] using a more traditional compiler architecture; it's easier to write and extend but feels about the same for reading and small tweaks. See also my review [3] of the denser compiler and precursor, Co-dfns.
As for being read by others, short snippets are definitely fine. Taking some from the last week or so in the APL Farm, {⍵÷⍨+/|-/¯9 ¯11+.○?2⍵2⍴0} and {(⍸⍣¯1+\⎕IO,⍺)⊂[⎕IO]⍵} seemed to be easily understood. Forum links at [4]; the APL Orchard is viewable without signup and tends to have a lot of code discussion.

There are APL codebases with many programmers, but they tend to be very verbose, with long names. Something like the YAML parser here, with no comments and single-letter names, would be hard to get into. I can recognize in a few seconds that, say, c⌿¨⍨←(∨⍀∧∨⍀U⊖)∘(~⊢∊LF⍪WS⍨)¨c trims leading and trailing whitespace from each string, but in other places there are a lot of magic numbers, so I get the "what" but not the "why". Eh, as I look over it, things are starting to make sense; I could probably get through this in an hour or so. But a lot of APLers don't have experience with the patterns used here.
[0] https://github.com/Co-dfns/Co-dfns
[1] https://github.com/mlochbaum/BQN/blob/master/src/c.bqn
[2] https://github.com/mlochbaum/Singeli/blob/master/singeli.bqn
[3] https://mlochbaum.github.io/BQN/implementation/codfns.html
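The whitespace-trimming idiom quoted in the comment above works by scanning a boolean mask from both ends. A hedged NumPy sketch of that pattern (the function name and whitespace set are assumptions for illustration, not the parser's actual code):

```python
import numpy as np

def trim(chars):
    """Trim leading and trailing whitespace from a 1-D character array,
    array-style: no loops, just scans over a boolean mask."""
    keep = ~np.isin(chars, [' ', '\t'])              # True at non-whitespace
    lead = np.logical_or.accumulate(keep)            # True from first non-blank on
    trail = np.logical_or.accumulate(keep[::-1])[::-1]  # True up to last non-blank
    return chars[lead & trail]                       # compress: keep the middle
```

The or-scan forward marks everything after the first non-blank character, the or-scan backward marks everything before the last one, and their intersection keeps interior whitespace intact, which mirrors the ∨⍀ (or-scan) and ⊖ (reverse) in the APL expression.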
-
k on pdp11
-
Uiua: A minimal stack-based, array-based language
> Are there any other languages that use glyphs so heavily?
APL (the first, invented in the 1960s): https://en.wikipedia.org/wiki/APL_(programming_language)
BQN (a modern APL, looks like an inspiration for Uiua though I don't know): https://mlochbaum.github.io/BQN/
Too many smaller esoteric languages to count.
-
Is there a programming language that will blow my mind?
Vouch for array programming, but also BQN. Modern, very good documentation, a bit less confusing than APL imo.
-
K: We need to talk about group
There’s also at least BQN, which I suspect is the language used in those comments.
-
APL: An Array Oriented Programming Language (2018)
-
Show HN: Glidesort, a new stable sort in Rust up to ~4x faster for random data
-
-🎄- 2022 Day 1 Solutions -🎄-
Well, a former Dyalog APL developer did go on to create his own language based on ideas from APL, called BQN, which is touted as "an APL for your flying saucer".
-
I spent the last 2 months converting APL primitives into executable NumPy
The latest APL-inspired and, in my opinion, best array language is BQN: https://github.com/mlochbaum/BQN
What are some alternatives?
APL - another APL derivative
Co-dfns - High-performance, Reliable, and Parallel APL
sbcl - Mirror of Steel Bank Common Lisp (SBCL)'s official repository
Kbd - Alternative unified APL keyboard layouts (AltGr, Backtick, Compositions)
type-system-j - adds an optional type system to J language
TablaM - The practical relational programming language for data-oriented applications
futhark - :boom::computer::boom: A data-parallel functional programming language
j-prez
array - Simple array language written in kotlin
april - The APL programming language (a subset thereof) compiling to Common Lisp.
jelm - Extreme Learning Machine in J
pyret-lang - The Pyret language.