10 Years Against Division of Labor in Software

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • teliva

    Fork of Lua 5.1 to encourage end-user programming

  • I question the need for scale in 90% of the places where the tech industry has cargo-culted it. Clearly I'm still failing to articulate this. Perhaps https://news.ycombinator.com/item?id=30019146#30040616 will help triangulate on what I mean.

    > Can you clarify what you see as the alternative? Implementing everything from scratch seems absurd and so costly that there’s no point in considering this an actual option.

    Not using, reimplementing, and copying are the closest things to solutions I have right now. You're right that they're not applicable to most people in their current context. I have a day job in tech and have to deal with some cognitive dissonance every day between my day job and my open source research. The one thing I have found valuable to take to my scale-obsessed tech job is to constantly be suspicious of dependencies and to constantly ask whether the operational burdens justify some new feature. Just switching mindset that way, from software as asset to software as liability, has, I'd like to believe, helped my org's decision-making.

    > We have probably invested dev-millennia into managing copies. This is exactly what source control does. This is not a new area of investment. Merging is a giant pain in the ass and very possibly always will be. Accepting merge pain better come with some huge benefits.

    Not all copying is the same. We've learned to copy the letter 'e' so well in our writings that we don't even think about it. In this context, even if I made a tool to make copying easier and merges more reliable, that would just cause people to take on more dependencies which defeats the whole point of understanding dependencies. So tooling would be counter-productive in that direction. The direction I want to focus on is: how can we help people understand the software they've copied into their applications? _That_ is the place where I want tooling to focus. Copying is just an implementation detail, a first, imperfect, heuristic coping mechanism for going from the world we have today to the world I want to move to that has 1000x more forks and 1000x more eyeballs looking at source code. You can see some (very toy) efforts in this direction at https://github.com/akkartik/teliva

    > It’s untenable to have, e.g., everyone who works on Windows be an expert in every part of the code.

    It's frustrating to say one thing in response to counter-argument A and have someone then bring up counter-argument B because I didn't talk about it right there in the response to counter-argument A. I think this is what Plato was talking about when he ranted about the problems with the newfangled technology of writing: https://newlearningonline.com/literacies/chapter-1/socrates-.... I'm not saying everyone needs to be an expert in everything. I'm saying software should reduce the pressure on people to be experts so that we can late-bind experts to domains. Not every software sub-system should need expertise at the scale at which it is used in every possible context. My Linux laptop doesn't need to be optimized to the hilt the way Google's server farms do. Using the same scheduling algo or whatever in my laptop imposes real costs on my ability to understand my computer, without giving me the benefits Google gets from the algo.

  • mu

    Soul of a tiny new machine. More thorough tests → More comprehensible and rewrite-friendly software → More resilient society. (by akkartik)

  • "Separation of concerns is a hard-won insight."

    Absolutely. I'm arguing for separating just concerns, without entangling them with considerations of people.

    It's certainly reasonable to consider my projects toy. I consider them research:

    * https://github.com/akkartik/mu

    * https://github.com/akkartik/teliva

    "The idea that projects should take source copies instead of library dependencies is just kind of nuts..."

    The idea that projects should take copies seems roughly symmetric to me with taking pointers: call by value vs. call by reference. We just haven't had 50 years of tooling to support copies. Where would we be by now if we had devoted equal resources to both branches?

    "...at least for large libraries."

    How are these large libraries going for ya? Log4j wasn't exactly a shining example of the human race at its best. We're trying to run before we can walk.
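    The call-by-value vs. call-by-reference analogy above can be sketched in a few lines of Python (a hypothetical illustration, not from the thread): a shared reference behaves like a library dependency, where upstream changes flow in automatically, while a copy behaves like vendored source that you own and merge by hand.

    ```python
    # Hypothetical illustration: a dict stands in for a library's code.
    shared = {"version": "1.0", "behavior": "upstream"}

    dependency = shared      # call by reference: upstream changes flow in automatically
    vendored = dict(shared)  # call by value: a copy you now own and must merge by hand

    shared["behavior"] = "changed upstream"

    print(dependency["behavior"])  # changed upstream
    print(vendored["behavior"])    # upstream
    ```

    The trade-off is the same in both directions: the reference gets fixes for free but can change under you; the copy is stable but accumulates merge debt.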

  • mu1

    Discontinued prototype tree-walking interpreter from when Mu was a high-level, statement-oriented language, c. 2018

  • Totally agreed!

    Here's a prototype from a few years ago where I tried to make this easier: https://github.com/akkartik/mu1#readme (read just the first few paragraphs)

    I still think the full answer lies in this direction.

  • plump

    Practically Lenient and Unimpressive Markup Parser for Common Lisp

    Heh, that thread brings to mind a bunch of things... One advantage of CL that helps with the 'borrowing' aspect that gives people the heebie-jeebies is that the unit of compilation is much smaller than the file, so you can also borrow much less. Another is that methods are decoupled from classes, so there's a lot of room for extensibility. (The LLGPL interestingly provides an incentive for the open-closed principle: if you extend the library's objects you're fine, but if you modify them then you are subject to the LGPL.) If you haven't read Gabriel's Patterns of Software book (free PDF on his site), I think you'd enjoy it.

    Your edits won't get blown away, at least by default, since quicklisp doesn't redownload a system that it knows about or check for 'corruption'. The way quicklisp does its own versioning also means if you update quicklisp's distributions lists and a newer version of the library has come out, it'll download that one into its own new folder and leave the old one alone. There's a cleanup function to clear out old things but I don't know of a case where that gets called hidden from you under the hood.

    Maybe there's some magic and interesting stuff related to this for emacs, but I'm a vim heretic ;) In any case, if you want to save stuff, you can just save it in a new or existing buffer... So the options are basically as I described. To give a specific example, I have a script that fetches and parses some HTML to archive comments, and I wanted to remove the HTML bits so I'd just have text, making it more markdown-y. There are lots of ways to do that and I'm pretty sure I chose one of the worst ones, but whatever. I was already using the Plump library, and after not achieving full success with its official extension mechanisms, one method I came across and stepped through was https://github.com/Shinmera/plump/blob/master/dom.lisp#L428, and I decided I could hijack it for my purposes. I started by editing and redefining it in place until I got what I wanted, but instead of saving my changes over the original, I simply copied it over to my script file, modifying it slightly to account for namespaces, e.g. it's "(defmethod plump-dom:text ((node plump:nesting-node)))", thus redefining and overwriting the library implementation when my script is loaded and run.

    Some possible problems with this approach in general: later you might try to integrate the script with other code that needs the default behavior (though CL's support for :before/:after/:around auxiliary methods can help here; e.g. if I can't just subclass, I could insert a seam with an :around method that branches between my custom implementation and the library's, without having to overwrite the library's; and in the long term the SICL implementation will show the way to first-class environments that can allow multiple versions of stuff to co-exist nicely). Or the library could update and change the protocol, breaking my hack when I update to it. Or in other situations there may be more complex changes: if you modify a macro but want the new modifications to apply to code compiled against the old definition, you need to redefine that old code; if you redefine a class and want or need to specially migrate pre-existing instances, you need to write an update-instance-for-redefined-class specializer; and if the changes span a lot of areas, it may be infeasible to cherry-pick them into a 'patch' file/section of your script, so you're faced with choices about how much of the library's files to fork and copy into your own project. But anyway, all those possible problems are on me.

    The asdf noise isn't that big of a deal and I think is only a little related here technically, since it's a rather special library situation. It's more 'interesting' socially, as a show of possible conflict from having a core piece of infrastructure shipped by the implementation but not owned/maintained by the implementation. An analogous situation would arise if gcc, clang, and visual studio all agreed to use and ship versions of musl for their libc, with any other libcs obsolete and long forgotten. A less analogous situation is the existing one with Linux distributions: sometimes they just distribute, sometimes they distribute-and-modify, and whether they're the first place to report issues or whether they punt to upstream varies case by case.
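    The :around-method seam described above has a rough analogue in most languages: keep a handle on the library's original function, then install a wrapper that branches to custom behavior for your cases and defers to the original everywhere else. A hedged Python sketch (toy names throughout; no real library involved):

    ```python
    class Node:
        """Toy stand-in for a DOM node (not part of any real library)."""
        def __init__(self, tag, children=()):
            self.tag = tag
            self.children = list(children)

    def text(node):
        """Toy stand-in for a library's default text extraction."""
        if isinstance(node, str):
            return node
        return "".join(text(c) for c in node.children)

    _library_text = text  # keep the original reachable, like call-next-method

    def text(node):  # "around" wrapper: branch to custom behavior, else defer
        if isinstance(node, Node) and node.tag == "br":
            return "\n"  # custom markdown-ish handling for one node type
        return _library_text(node)

    doc = Node("p", ["hello", Node("br"), "world"])
    print(repr(text(doc)))  # 'hello\nworld'
    ```

    Because the original's recursive calls resolve to the global name `text` at call time, the wrapper applies even to nested nodes, which mirrors how redefining a generic function in CL retroactively affects existing callers.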

