array
- Benchmarking 20 programming languages on N-queens and matrix multiplication
I should have mentioned somewhere that I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want thread parallelism outside code like this anyway.
As for the inner loop not being well optimized: the disassembly looks basically the same as OpenBLAS's. There's disassembly in the comments of that file showing what code it generates; I'd love to know what you think is lacking! The only differences between the one I linked and this are prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...
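The loop-ordering and tiling ideas being compared above can be sketched in plain Python (a toy illustration of the technique, not the library's actual C++ kernels; the tile size of 2 is arbitrary):

```python
def matmul_naive(a, b):
    """Naive triple loop in i, j, k order over lists of lists."""
    n, m, p = len(a), len(b[0]), len(b)
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(p):
                c[i][j] += a[i][k] * b[k][j]
    return c

def matmul_tiled(a, b, tile=2):
    """Same arithmetic, but iterated over square tiles.

    In a compiled language this ordering keeps small blocks of a, b,
    and c hot in cache; here it only demonstrates the loop structure.
    """
    n, m, p = len(a), len(b[0]), len(b)
    c = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, m, tile):
            for kk in range(0, p, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, m)):
                        for k in range(kk, min(kk + tile, p)):
                            c[i][j] += a[i][k] * b[k][j]
    return c
```

Both orderings compute the same product; only the traversal order (and thus the memory access pattern) differs.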
- A basic introduction to NumPy's einsum
If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions
It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools needed for humans to express the code in a way that a good compiler can turn into really good code.
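For reference, the NumPy einsum notation the linked article introduces looks like this (a quick sketch; the example arrays are made up):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Matrix multiply: the repeated index j is contracted (summed over).
c = np.einsum('ij,jk->ik', a, b)

# Trace: repeating an index on one operand with an empty output
# sums the diagonal.
t = np.einsum('ii->', np.eye(3))

# Row sums: keep i in the output, sum over j.
r = np.einsum('ij->i', a)
```

The C++ library linked above mimics this style with expressions over named loop variables, leaving loop optimization to the compiler.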
- Is APL Dead?
- The Lisp OS “Mezzano” Running Native on Librebooted ThinkPads
- Learning Common Lisp to beat Java and Rust on a phone encoding problem
I have a bunch of links to ML material for either APL or J. I don't know of any particular ML library for J. J is interpreted, so it is not as fast as other implementations; I mainly use it to experiment with concepts and teach myself more ML because of the interactive nature of the REPL and the succinct code. I can keep what's going on in my head, and glance at fewer than 100 lines of code, usually about 15, to refresh it.
There is a series of videos on learning neural networks in APL, cited by others in this thread.
The Pandas author, Wes McKinney, has cited J as an influence on his work on Pandas.
Extreme Learning Machine in J (code and PDF are here too):
https://github.com/peportier/jelm
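The idea behind an extreme learning machine is compact enough to sketch in NumPy (a toy version for illustration, not the J implementation linked above: a random, fixed hidden layer followed by a single least-squares solve for the output weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = x^2 on [-1, 1].
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = x ** 2

# Hidden layer: random weights and biases that are never trained.
hidden = 50
w = rng.normal(size=(1, hidden))
b = rng.normal(size=hidden)
h = np.tanh(x @ w + b)

# Output weights: one least-squares solve, no iterative training loop.
beta, *_ = np.linalg.lstsq(h, y, rcond=None)

pred = h @ beta
```

The absence of backpropagation is the whole trick: only the linear output layer is fit, which a single `lstsq` call handles.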
Convolutional neural networks in APL (PDF and video on page):
https://dl.acm.org/doi/10.1145/3315454.3329960
A DSL for implementing MENACE (the Matchbox Educable Noughts And Crosses Engine; Noughts and Crosses is Tic-tac-toe) in APL:
https://romilly.github.io/o-x-o/an-introduction.html
What are some alternatives?
optimizing-the-memory-layout-of-std-tuple - Optimizing the memory layout of std::tuple
BQN - An APL-like programming language. Self-hosted!
NumPy - The fundamental package for scientific computing with Python.
array - Simple array language written in Kotlin
cadabra2 - A field-theory motivated approach to computer algebra.
apltail - APL Compiler targeting a typed array intermediate language
alphafold2 - To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
woo - A fast non-blocking HTTP server on top of libev
Einsum.jl - Einstein summation notation in Julia
bordeaux-threads - Portable shared-state concurrency for Common Lisp
c-examples - Example C code