Why Lisp? (2015)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • GitHub repo babashka

    Native, fast starting Clojure interpreter for scripting

    Clojure through Babashka[0]. It's great for small tasks not related to your favorite editor. I use it for most of my scripting except for the really easy things in bash.

    [0] https://babashka.org/
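    For a flavor of what that scripting looks like, here's a minimal babashka script (a sketch; the filename and task are invented for illustration):

    ```clojure
    #!/usr/bin/env bb
    ;; count-lines.clj -- print each file argument with its line count
    (require '[clojure.java.io :as io])

    (doseq [f *command-line-args*]
      (with-open [r (io/reader f)]
        (println f (count (line-seq r)))))
    ```

    Run it as `bb count-lines.clj *.txt`; startup is fast enough that it feels like a shell tool.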

  • GitHub repo hissp

    It's Python with a Lissp.

    If you're already familiar with Python libraries, check out [Hissp](https://github.com/gilch/hissp), which compiles to Python expressions.

  • GitHub repo racket

    The Racket repository

    I'd say the "python of lisp" is definitely racket.

    Beginner friendly, lots of batteries included, large community with lots of libraries.

    Here's an example of a simple webserver using the standard library: https://docs.racket-lang.org/web-server/run.html#%28part._se...
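    That standard-library webserver is only a few lines; a minimal sketch along the lines of the linked docs:

    ```racket
    #lang racket
    (require web-server/servlet
             web-server/servlet-env)

    ;; Respond to every request with a small HTML page.
    (define (start req)
      (response/xexpr
       '(html (body (h1 "Hello from Racket!")))))

    ;; Starts a server and opens the page in a browser.
    (serve/servlet start)
    ```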

    I'm also a fan of the first tutorial on the site. Instead of the boring stuff like hello world, 2+2, print your name, etc., it uses a built-in library for drawing shapes, and you learn how to build up procedures for drawing more complex objects.

    It feels like learning to code with Logo (a clone of which is also built in: https://docs.racket-lang.org/logo/index.html).

    All that said, I've never used it for system-plumbing scripts the way I use Python. I'd imagine it's worse for that simply because Python is so much more popular.

    https://racket-lang.org/

  • GitHub repo janet

    A dynamic language and bytecode vm

    I haven't tried it, but there is Janet (https://janet-lang.org/) that seems pretty well suited for building small scripts. It was submitted a few times recently here on HN.

  • GitHub repo coalton

    Coalton is an efficient, statically typed functional programming language that supercharges Common Lisp.

    I'm fairly new to Lisp programming, but there's a language called Coalton that adds static type checking to Common Lisp. Coalton code is embedded in Common Lisp, so you get some static type guarantees while keeping all of the interactivity CL devs are used to.

    https://github.com/coalton-lang/coalton
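    A small sketch of what this looks like (based on Coalton's documented syntax; details may have changed since):

    ```lisp
    (coalton-toplevel
      ;; Declarations give static types that are checked at compile time.
      (declare add-one (Integer -> Integer))
      (define (add-one x)
        (+ 1 x)))

    ;; Coalton expressions can be evaluated from ordinary Common Lisp:
    ;; (coalton (add-one 41))
    ```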

  • GitHub repo bel

    An interpreter for Bel, Paul Graham's Lisp language (by masak)

    If I may be so bold as to recommend my in-progress Bel implementation: https://github.com/masak/bel

    Caveat: I'm still working towards being able to recommend Bel _unconditionally_, not just for small programs. Right now you'll experience unreasonable slowness, terse/uninformative error messages, and missing documentation -- probably in that order. All of those are being addressed. But already today, it's fun to play with.

  • GitHub repo cmu-infix

    Updated infix.cl of the CMU AI repository, originally written by Mark Kantrowitz

    (The list of forms is passed, unevaluated and at compile time, to nest, which rewrites them using a right fold to nest things properly.)

    Somewhat similar is the arrow macro that Clojure popularized, which lets you replace (deep (nesting (like (this ...)))) -- where you have to remember that evaluation order is inside-out -- with a flatter (-> (this ...) like nesting deep). Its implementation is also easy: many macros are easy to write because Lisp source code is itself a list data structure that you can process and manipulate just like any other list.
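    For instance (Clojure; `->>` is the thread-last variant of the same idea):

    ```clojure
    ;; Nested, inside-out:
    (reduce + (filter odd? (map inc (range 10))))

    ;; Flattened with the threading macro -- reads top-to-bottom:
    (->> (range 10)
         (map inc)
         (filter odd?)
         (reduce +))
    ;; both evaluate to 25
    ```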

    Another cool macro that's been around since 1993 is https://github.com/quil-lang/cmu-infix which lets you write math in infix style, e.g. #I( C[i, k] += A[i, j] * B[j, k] ) where A, B, and C are all matrices represented as 2D arrays. It's a lot more complicated than the nest macro, though.

    There are some other things that still make Lisp great in comparison to other languages, but they don't exactly have one-line code examples like [::-1], so I'll just describe them qualitatively.

    Common Lisp has CLOS, the first standardized OOP system, and it's a lot more powerful than C++'s. It differs from many systems in that classes and methods are separate; among other things, this gives you multiple dispatch: you can define polymorphic methods that dispatch on all of their arguments, not just the first (the explicit 'self' in Python, the implicit 'this' in other languages). One thing this is useful for is eliminating many laborious uses of the Builder and Visitor patterns -- e.g. the need for double dispatch is a common reason to reach for the Visitor pattern, but in Lisp there's no need.

    CLOS also does "method combination", which lets you define :before, :after, and :around methods that run implicitly before/after/around a call. This gets rid of the Observer pattern, supports design-by-contract, and plays well with multiple inheritance: you can create "mixins" whose only behavior is some :before/:after methods (e.g. logging, cleaning up resources, or validation).
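    A sketch of both ideas in Common Lisp (the class and method names are invented for illustration):

    ```lisp
    (defclass ship () ())
    (defclass asteroid () ())

    ;; Multiple dispatch: the method is chosen by the classes of BOTH arguments.
    (defgeneric collide (a b))
    (defmethod collide ((a ship) (b asteroid)) :ship-hits-asteroid)
    (defmethod collide ((a asteroid) (b ship)) :asteroid-hits-ship)

    ;; Method combination: runs implicitly before any primary method.
    (defmethod collide :before (a b)
      (format t "collision: ~a vs ~a~%" (type-of a) (type-of b)))
    ```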

    Everything is truly dynamic -- an object can even change its class at runtime, which may be an acceptable solution to the circle-ellipse problem, or just super convenient while developing. More fundamentally, compile is a built-in function, not something you have to do with a separate program. Disassemble is built in, too, so you can see what the compiler is doing and how optimized something is. You have full flexibility to define and redefine how your program works as it's running; no need to restart and lose state if you don't want to.

    Besides being killer for development (and the differences in development experience are a big part of why I still think Lisp is great compared to non-Lisps), this gives you a powerful way to do production debugging and hot-fixing too -- a footgun you might not want most of the time, but one that requires nothing special when you do want it. It can be very useful, e.g. if you've got a spacecraft 100 million miles from Earth: https://flownet.com/gat/jpl-lisp.html

    I've also put some hobby stuff on a server, deployed as a single binary, but built so that if I want to change it I can either stop it, replace the binary, and restart, or just SSH in, connect to the live program from my editor (via SSH forwarding), and load the new code changes just like I would when developing locally -- zero downtime.
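    At the REPL this looks like the following (SBCL-style transcript sketch; compile and disassemble are standard Common Lisp, but the disassembly output is implementation-specific):

    ```lisp
    CL-USER> (defun square (x) (* x x))
    SQUARE
    CL-USER> (compile 'square)       ; COMPILE is just an ordinary function
    SQUARE
    CL-USER> (disassemble 'square)   ; inspect the generated machine code
    ; ... implementation-specific assembly listing ...
    ```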

    Lastly, Lisp's solution to error handling goes beyond traditional exception handling. Again this ties into the development experience: you get some compile-time warnings depending on the implementation (e.g. typos, undefined functions, bad types), but you'll hit runtime errors eventually, and Lisp provides the condition system to help deal with them. It can also be used for signaling non-errors, which has its uses, but what you'll see first are probably unhandled errors. By default, one drops you into a debugger at the point where the error occurred; the stack isn't immediately unwound. There you can do whatever: inspect or change variables at different stack frames, recompile code if there's a way to fix things, restart computation at a specific frame...

    You'll also be given the option of "restarts", which might include just an "abort" that unwinds to the top level (possibly ending a thread), but can also include custom actions that resolve the error in different ways. For example, if you're parsing a CSV file and hit a value that is wrong somehow (empty, bad type, illegal value, whatever), your restarts might be to provide your own value or some default (which will be used, and computation resumes with the next value in the row), or to skip the whole row (moving on to the next one), or to skip the whole file.

    Again, this is very useful while debugging, but in production you can either program in default resolutions (plus a catch-all handler that logs unhandled errors, as usual) or give the choice to the user -- in a friendlier way than exposing the debugger, if you please.
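    A sketch of the CSV scenario in Common Lisp (condition and restart names invented for illustration):

    ```lisp
    (define-condition bad-cell (error)
      ((text :initarg :text :reader bad-cell-text)))

    (defun parse-cell (s)
      (restart-case
          (if (string= s "")
              (error 'bad-cell :text s)
              (parse-integer s))
        ;; Restarts: ways the error could be resolved, offered up the stack.
        (use-value (v) v)
        (skip-cell () nil)))

    ;; A handler that picks a resolution without unwinding the stack:
    (handler-bind ((bad-cell (lambda (c)
                               (declare (ignore c))
                               (invoke-restart 'use-value 0))))
      (mapcar #'parse-cell '("1" "" "3")))
    ;; => (1 0 3)
    ```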

  • GitHub repo awesome-lisp-companies

    Awesome Lisp Companies

    also me o/ and https://github.com/azzamsa/awesome-lisp-companies/ (but I understand your feeling^^)

  • GitHub repo aws-api

    AWS, data driven

    Related to this, there's an AWS SDK for Clojure [0] (created by the same people who are behind Clojure), which is generated from the AWS specs themselves. Carmine, a popular Clojure library for Redis, does something very similar. I suspect doing the same in CL would be similarly simple.

    [0] https://github.com/cognitect-labs/aws-api
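    Usage is data-driven: operations are plain keywords and payloads are plain maps, all derived from the service specs. A sketch (assumes AWS credentials are configured):

    ```clojure
    (require '[cognitect.aws.client.api :as aws])

    (def s3 (aws/client {:api :s3}))

    ;; Ops and their request/response shapes come straight from the AWS specs:
    (aws/invoke s3 {:op :ListBuckets})

    ;; Documentation for any op is generated from the same data:
    (aws/doc s3 :ListBuckets)
    ```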

  • GitHub repo carmine

    Redis client and message queue for Clojure

  • GitHub repo paip-lisp

    Lisp code for the textbook "Paradigms of Artificial Intelligence Programming"

    i think PAIP is an absolute gem! not just for learning lisp, but for software engineering in general. not only that, but you can read it online in .md format with proper syntax highlighting, which makes for a much better experience. and in fact not only even that (!!), but if you use org-mode you can convert the .md files to .org files flawlessly via pandoc and enjoy the whole experience interactively (like a jupyter notebook, but on a whole different level)

    useful link:

    https://github.com/norvig/paip-lisp

    https://orgmode.org/

    https://emacs.stackexchange.com/questions/5465/how-to-migrat...
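    The pandoc conversion described above is a one-liner (assuming pandoc is installed; the filename is just an example):

    ```shell
    pandoc -f markdown -t org chapter1.md -o chapter1.org
    ```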

  • GitHub repo github-orgmode-tests

    This is a test project where you can explore how github interprets Org-mode files

  • GitHub repo julia

    The Julia Programming Language

    there was dylan before julia, so julia might just be reinventing dylan :) but that's not what's interesting

    the julia project is in fact very interesting to me, and it has a great team developing its ecosystem; i work with it alongside python for numerical work. however, one key drawback (compared to common lisp) for me is that it is not self-hosted: it sits on llvm, and over 30% of its repository is in other languages (mainly C and C++)[0]. i am saying this only in comparison to common lisp; other languages are no different in this respect, and many are worse. as far as scientific computing is concerned, i would work with julia over python any day

    [0] https://github.com/JuliaLang/julia

  • GitHub repo cl4py

    Common Lisp for Python

  • GitHub repo py4cl

    Call python from Common Lisp

  • GitHub repo CUDA.jl

    CUDA programming in Julia.

    > You can write a lot of macrology to get around it, but there's a point where you want actual compiler writers to be doing this

    this is not the job of compiler writers (although writing macros is akin to writing a compiler, i do not think that is what you mean). in julia, the numerical programming packages are not part of the standard library, and a lot of them are wrappers around C++ code, especially when the drivers for the underlying hardware are closed-source [0]. here is the similar library in common lisp [1]

    [0] https://github.com/JuliaGPU/CUDA.jl

    [1] https://github.com/takagi/cl-cuda

  • GitHub repo cl-cuda

    Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.

  • GitHub repo LoopVectorization.jl

    Macro(s) for vectorizing loops.

    Yes, and sorry if I also came off as combative here, it was not my intention either. I've used some Common Lisp before I got into Julia (though I never got super proficient with it) and I think it's an excellent language and it's too bad it doesn't get more attention.

    I just wanted to share what I think is cool about julia from a metaprogramming point of view, which I think is actually its greatest strength.

    > here is a hypothetical question that can be asked: would a julia programmer be more powerful if llvm was written in julia? i think the answer is clear that they would be

    Sure, I'd agree it'd be great if LLVM was written in julia. However, I also don't think it's a very high priority because there are all sorts of ways to basically slap LLVM's hands out of the way and say "no, I'll just do this part myself."

    E.g. consider LoopVectorization.jl [1], which does some very advanced program transformations that would normally happen at the LLVM level (or lower). This package is written in pure Julia and is all about bypassing LLVM's pipelines to create hyper-efficient microkernels that are competitive with the handwritten assembly in BLAS systems.

    To your point, yes, Chris's life likely would have been easier here if LLVM were written in Julia. But he also managed to create this with a lot less manpower and in a lot less time than anything like it that I know of, and it's screaming fast, so I don't think LLVM not being implemented in Julia was such a huge impediment for him.

    [1] https://github.com/JuliaSIMD/LoopVectorization.jl

  • GitHub repo tweetnacl

    Because that product was an embedded system running on a very small SoC. It only had 1MB of flash and 192k of SRAM. It's theoretically possible to run CL on a system that small -- Coral Common Lisp ran on a Mac Plus with 1MB of RAM back in the 1980s -- but nothing off-the-shelf will do that today.

    (I did, however, put a little Scheme interpreter on it as an easter egg :-)

    I do have some CL code that supports the crypto project. The back-end for this:

    https://stage.sc4.us/sc4/sc4tk.html

    is written in CL (though all the actual encryption is done client-side in Javascript). I also have some prototype crypto code that I don't really use for anything, including this double-ratchet implementation:

    https://github.com/rongarret/tweetnacl/blob/master/ratchet.l...

    and some elliptic curve code:

    http://www.flownet.com/ron/lisp/djbec.lisp
