april vs Co-dfns

| | april | Co-dfns |
|---|---|---|
| Mentions | 55 | 26 |
| Stars | 627 | 756 |
| Growth | 0.5% | 1.5% |
| Activity | 8.8 | 9.3 |
| Latest commit | about 1 month ago | 24 days ago |
| Language | Common Lisp | APL |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
april
-
Array Languages for Clojurians (2020)
Other work on integrating APL and lisp is here: https://github.com/phantomics/april
- Ask HN: 30y After 'On Lisp', PAIP etc., Is Lisp Still "Beating the Averages"?
-
Arthur Whitney releases K source with MIT license
Try J or APL, K, BQN, or April, and be prepared to rethink how you implement solutions to problems you've tackled in other PLs. I am an array language user and fan. I have been playing with April and I use J regularly at home and sometimes for work when I can.
From the April github site: "April compiles a subset of the APL programming language into Common Lisp. Leveraging Lisp's powerful macros and numeric processing faculties, it brings APL's expressive potential to bear for Lisp developers. Replace hundreds of lines of number-crunching code with a single line of APL."
https://github.com/phantomics/april
-
Thinking in an Array Language
There are attempts to combine those...
April (Array Programming Re-Imagined in Lisp)
https://github.com/phantomics/april
> operations that apply to the whole array
Higher-order functions like MAP and REDUCE are not really new to Lisp; in Common Lisp they work on vectors as well as lists.
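For illustration only, here is the same whole-sequence, higher-order style in Python, with the equivalent Common Lisp calls noted in comments:

```python
from functools import reduce

# Analogous to Common Lisp's (map 'vector #'1+ #(1 2 3)) => #(2 3 4):
incremented = list(map(lambda x: x + 1, [1, 2, 3]))

# Analogous to (reduce #'+ #(1 2 3)) => 6:
total = reduce(lambda a, b: a + b, [1, 2, 3])
```

The array-language difference is not the existence of such operations but that whole-array application is the default, pervasive mode of expression rather than an explicit higher-order call.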
> list languages and array languages are quite different.
There are some common things like interactive use, functional flavor, etc.
- April
-
A Personal History of APL (1982)
There's also April APL: https://github.com/phantomics/april
Also the array language family seems to be stronger than ever with foss: ngn/k, BQN, uiua, and of course J but as you mentioned they're all different languages.
-
The C juggernaut illustrated (2012)
I love J and APL, but April takes the cake for me[1]. APL in Lisp.
I also prefer SPARK2014 over Rust when I'm not going to use C. I've started learning Rust a few times, but SPARK2014 is easier for me to get going with, and it has been used to produce high-integrity software and real-world applications for over a decade, and longer if you include Ada, from which it sprang[2].
[1] https://github.com/phantomics/april
[2] https://www.adacore.com/about-spark
-
Erlang: The coding language that finance forgot
The one big use case was RabbitMQ in a messaging app, not HFT. I doubt Elixir even with Nx can compete with low-level HFT code. Python DL/ML code libraries are just wrappers around C too. Maybe if BeamAsm and Nx are used Elixir could be used for more numerical or not just distributed applications.
I've programmed in Python and Julia, and when I worked at an engineering (mechanical, entertainment engineering) company, Julia was great for its similarity to Matlab. I am a self-taught engineer, so I did not get pulled into Matlab in college.
Personally, I took to Erlang so I could write plugins for Wings3D back in the early 2000s, but I never stuck with Erlang, or Wings3D (Blender3D was my choice, and I even contributed to having it go open source way back when). I like Erlang's syntax better for some reason, although Elixir's is beautiful too. I was not a Ruby programmer, and I had delved into Haskell and Prolog, so I think Erlang made more sense to me. I think Elixir has a lot more momentum behind it than Erlang, but at the root it's Erlang, so I think I'll stick with Erlang for BEAM apps. My favorite language is April[1] (APL in Lisp), and given my love of J, it would be a better fit for any finance apps I might write. I am trying to convert some of the Lisp code in the book "Professional Automated Trading: Theory and Practice" to April.
Maybe I'll write some equivalent Elixir code to compare.
[1] https://github.com/phantomics/april
-
Learn Lisp the Hard Way
I'm also very curious to hear from expert lispers. I've tried to find the sweet spot where lisp would fit better than what I already know: shell for glue and file ops, R for data munging and vis, python to not reinvent things, perl/core-utils for one-liners. But before I can find the niche, I get turned off by the amount of ceremony -- or maybe just by how different the state and edit/evaluate loop is.
I'm holding onto some things that make common lisp look exciting and useful (static typing[0], APL DSL[1], speed [2,3,4]) and really want to get familiar with structural editing [5]
[0] https://github.com/phantomics/april - APL dsl
-
The APL Programming Language Source Code (2012)
The 2 0 at the start of the APL line above controls the mirroring behavior. The second number can be set to 0 or 1 to choose which side of the image to mirror, while the 2 sets the axis along which to mirror. This will be 1 or 2 for a raster image but this function can mirror any rank of array on any axis.
April was used to teach image filtering in a programming class for middle-schoolers, you can see a summary in this video: https://vimeo.com/504928819
For more APL-driven graphics, April's repo includes an ncurses demo featuring a convolution kernel powered by ⌺, the stencil operator: https://github.com/phantomics/april/tree/master/demos/ncurse...
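For readers without APL, the mirroring idea is easy to sketch elsewhere. Here is a rough NumPy version, purely an illustration (the function name and argument convention are invented, not April's; note also that NumPy axes are 0-indexed where APL's are 1-indexed):

```python
import numpy as np

def mirror(img, axis, side=0):
    """Reflect one half of an array onto the other along `axis`.

    side=0 keeps the leading half and mirrors it over the trailing half;
    side=1 does the opposite. Works on any rank of array.
    """
    n = img.shape[axis] // 2
    if side == 0:
        half = np.take(img, range(n), axis=axis)
        return np.concatenate([half, np.flip(half, axis=axis)], axis=axis)
    else:
        half = np.take(img, range(img.shape[axis] - n, img.shape[axis]), axis=axis)
        return np.concatenate([np.flip(half, axis=axis), half], axis=axis)
```

For example, `mirror(img, axis=1, side=0)` mirrors the left half of a raster image onto the right.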
Co-dfns
-
APL Interpreter – An implementation of APL, written in Haskell
> Array programming is similar to functional programming – the primary way to control execution involves composition of functions – but APL tends to encourage the reliance on global properties and sweeping operations rather than low-level recursion4.
This AoC solution is, indeed, quite the functionista! Better yet, it leans heavily into point-free expressions. The style is pretty popular amongst APL language enthusiasts and puzzlers.
That said, you actually see quite different APL styles in the wild:
- Pointed declarative style, also popular with functional programmers (e.g. anything like this[0])
- Imperative, structured programming, very common in legacy production systems (e.g. this[1] OpenAI API interface)
- Object-oriented, also common in somewhat newer production environments (e.g. the HTTP interface[2])
- Data-parallel style (e.g. Co-dfns[3])
Heck, APL even has lexical and dynamic scope coexisting together. IMHO, it's truly underrated as a language innovator.
[0]:https://dfns.dyalog.com/c_match.htm
[1]:https://github.com/Dyalog/OpenAI/blob/main/source/OpenAI.apl...
[2]:https://github.com/Dyalog/HttpCommand/blob/master/source/Htt...
[3]:https://github.com/Co-dfns/Co-dfns/blob/master/cmp/PS.apl
- Co-dfns: High-performance, reliable, and parallel APL
-
Notation as a Tool of Thought (1979)
These are all cribbed from the Co-dfns[0] compiler and related musings. The key insight here is that what would be API functions or DSL words are just APL expressions on carefully designed data. To pull this off, all the design work that would go into creating an API goes into designing said data to make such expressions possible.
In fact, when you see the above in real code, they are all variations on the theme, tailored to the specific needs of the immediate sub-problem. As library functions, such needs tend to accrete functions and function parameters into our library methods over time, making them harder to understand and visually noisier in the code.
To my eyes, the crux is that our formal language is _discovered_ not handed down from God. As I'm sure you're excruciatingly aware, that discovery process means we benefit from the flexibility to quickly iterate on the _entire architecture_ of our code, otherwise we end up with baked-in obsolete assumptions and the corresponding piles of workarounds.
In my experience, the Iversonian languages provide architectural expressiveness and iterability _par excellence_.
[0]:https://github.com/Co-dfns/Co-dfns/tree/master
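As a rough Python/NumPy illustration of the "expressions on carefully designed data" idea (the parent-vector encoding is a common convention in Co-dfns-style tree representations; the function itself is invented for illustration): computing every node's depth at once from a parent vector, with no tree objects or visitor API in sight.

```python
import numpy as np

# A tree encoded as a parent vector: parent[i] is node i's parent,
# and the root points to itself.
parent = np.array([0, 0, 0, 1, 1])  # 0 is root; 1, 2 its children; 3, 4 under 1

def depths(p):
    """Depth of every node simultaneously, via repeated whole-array parent hops."""
    cur = np.arange(len(p))      # every node starts at itself
    d = np.zeros(len(p), dtype=int)
    while True:
        at_root = cur == p[cur]  # which nodes have already climbed to the root
        if at_root.all():
            return d
        d += ~at_root            # everyone still climbing gains one level
        cur = p[cur]             # hop all nodes up one parent, together
```

The design effort went into the data layout (the parent vector), after which the "API" is just short array expressions over it.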
-
System Design of a Cellular APL Computer
My YAML loader[0] is where I first broke through the wall. It's still languishing in a relatively proof-of-concept state but does exhibit the basic design principles.
There's also a Metamath verifier that does parallel proof verification on the GPU. It's unpublished right now because the whole thing is just a handful of handwritten code in my notebook at the moment. Hoping to get this out this month, actually.
A DOOM port is bouncing around in my notes as well as a way to explore asynchronous APL.
I'm also helping Aaron Hsu in his APL compiler[1] for stuff adjacent to my professional work, which I can't comment on much, unfortunately.
Et hoc genus omne
[0]:https://github.com/xelxebar/dayaml
[1]:https://github.com/Co-dfns/Co-dfns
-
Ask HN: Should I learn COBOL at 14yo in 2025?
What a perceptive question!
Learning boring technology and invisible infrastructure can definitely pay off. I don't think it's worth learning in isolation, but if you also engage in the relevant communities (think mailing lists, in-person conferences, company events, etc.), the effort pays good dividends IME.
I'm a bit biased, but I recommend looking at APL[0]. For one, it has a legacy almost as old as COBOL, with large pieces of European infrastructure running on the language. At the same time, it's cutting edge in both performance and the software architecture principles it encourages. Heck, APL even runs on GPUs these days [1], boasting a solid route for learning modern GPU programming as well.
Also, the company behind the leading APL implementation these days, Dyalog[0], has some of the friendliest outreach around, and their yearly conferences are some of my favorites to attend.
Disclaimer: I am kind of in love with APL as a greenfield development language. Feel free to email me personally if you have any questions. Address is in my HN profile.
[0]:https://dyalog.com/
[1]:https://github.com/Co-dfns/Co-dfns
-
Co-Dfns v5.7.0
Huh. This is the commit that introduces the register model: https://github.com/Co-dfns/Co-dfns/commit/f89919144f22441f21...
In the compiler, it's working with dense adjacency matrix representations, so this will be at best O(n^2) in whatever the relevant sort of node is (expressions? functions?). "at present is not optimized" seems a bit of an understatement here: I've never heard of these sorts of graph algorithms being possible with good asymptotics in an array style. In practice, I think any value of Co-dfns would be more in the fact that it emits GPU code than that it does it quickly, but this calls into question the value of writing it in APL, I would say (for what it's worth, I believe Co-dfns is still not able to compile itself).
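To make the O(n^2) point concrete, here is a hedged Python sketch (not Co-dfns code) of an array-style traversal over a dense adjacency matrix: each expansion step scans whole rows, so a step costs O(n^2) regardless of how many edges the graph actually has.

```python
import numpy as np

# Dense adjacency matrix: A[i, j] is True iff there is an edge i -> j.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)

def reachable(A, start):
    """All nodes reachable from `start`, expanded one frontier at a time."""
    seen = np.zeros(len(A), dtype=bool)
    seen[start] = True
    while True:
        # Follow every edge out of `seen` in one whole-matrix operation:
        # scans n entries per seen row, hence O(n^2) work per step.
        nxt = seen | A[seen].any(axis=0)
        if (nxt == seen).all():
            return seen
        seen = nxt
```

A pointer-chasing implementation on sparse structures would do O(edges) work instead, which is the asymptotic gap the comment above is pointing at.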
The phrase "allocate all intermediate arrays" seems fairly confusing: what's actually being allocated must be space for the pointers to these arrays and other metadata, not the data of the array itself. As the data is variable-size, it can't be fully planned out at compile time, and I'm fairly sure the register allocation is done when there's not enough shape or type information to even make an attempt at data allocation. This change can only improve constant overhead for primitive calls, and won't be relevant to computations where most of the work is on large arrays. I think it's helpful for Co-dfns in a practical sense, but not too exciting as it only helps with bad cases: if this is important for a user then they'd probably end up with better performance by switching to a statically-typed language when it comes time to optimize.
Constant lifting within the compiler is pretty cool, I'll have to look into that.
-
Tacit Programming
And if anyone wants an absolute masterclass in tacit programming, have a look at Aaron's Co-dfns compiler. The README has extensive reference material. https://github.com/Co-dfns/Co-dfns/
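For readers without APL, here is a loose Python analogy of the tacit style (the combinators below are invented for illustration and only gesture at APL trains):

```python
import operator
from functools import reduce

# "Tacit" means building new functions purely by combining existing ones,
# never naming the argument.
def compose(*fs):
    """compose(f, g)(x) == f(g(x)) -- right-to-left application."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

def fork(f, g, h):
    """APL-style fork (f g h): x -> (f x) g (h x)."""
    return lambda x: g(f(x), h(x))

# The classic APL mean (+/ ÷ ≢), "sum divided by tally", as a fork:
mean = fork(sum, operator.truediv, len)
```

For example, `mean([1, 2, 3, 4])` yields `2.5` without ever binding the input list to a name inside the definition.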
- YAML Parser for Dyalog APL
-
HVM updates: simplifications, finally runs on GPUs, 80x speedup on RTX 4090
This always seemed like a very interesting project; we need to get to the point where, if things can run in parallel, they must run in parallel, to make software more efficient on modern CPUs/GPUs.
It won't attract funds, I guess, but it would be far more trivial to make this work with an APL or a Lisp/Scheme. There already is great research for APL[0] and looking at the syntax of HVM-core it seems it is rather easy to knock up a CL DSL. If only there were more hours in a day.
[0] https://github.com/Co-dfns/Co-dfns
- Co-Dfns
What are some alternatives?
APL - another APL derivative
dex-lang - Research language for array processing in the Haskell/ML family
lisp-matrix - A matrix package for common lisp building on work by Mark Hoemmen, Evan Monroig, Tamas Papp and Rif.
ngn-apl - An APL interpreter written in JavaScript. Runs in a browser or NodeJS.
stumpwm - The Stump Window Manager
BQN - An APL-like programming language