42nd-at-threadmill VS numericals

Compare 42nd-at-threadmill vs numericals and see what their differences are.

numericals

CFFI-enabled, SIMD-powered simple-math numerical operations on arrays for Common Lisp [still experimental] (by digikar99)
                 42nd-at-threadmill                  numericals
Mentions         4                                   6
Stars            56                                  47
Growth           -                                   -
Activity         0.0                                 7.7
Latest commit    over 1 year ago                     28 days ago
Language         Common Lisp                         Common Lisp
License          BSD 2-clause "Simplified" License   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

42nd-at-threadmill

Posts with mentions or reviews of 42nd-at-threadmill. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-12.
  • Share a hash table with SBCL and Allegro Serve
    3 projects | /r/Common_Lisp | 12 Oct 2021
  • Revisiting Prechelt’s paper comparing Java, Lisp, C/C++ and scripting languages
    2 projects | news.ycombinator.com | 9 Aug 2021
    I agree that "C/C++" isn't a good sign, though it was more forgivable when C++ just meant C++98... For speed, nowadays, you use X. That is, many languages can be fast, especially if they have access to intrinsics; the real question is how much special knowledge and herculean effort you have to spend. It's still true that for many problems idiomatic modern C++ will give a very nice result that may be hard to beat (though be careful if you forgot cin.sync_with_stdio(false) or you may be slower than Python!). But it's also true that C, Rust, Java, Common Lisp, and even JS all do very well out of the box these days, while languages like Python and Ruby have lagged. For this problem, the author had to spend quite a bit of effort in a follow-up post to get Rust to match some 20-year-old unoptimized CL.

    If one wants to optimize Lisp, a simple place to start is with some type declarations (at least with SBCL). It can even be kind of fun to have the compiler yell at you because e.g. it couldn't infer a type somewhere and was forced to do a generic add, so you tell it some things are fixnums, see the message go away, and verify if you want with DISASSEMBLE that it's now using LEA instead of a CALL [a minimal declaration/DISASSEMBLE sketch appears after this list]. For an example of going to quite a bit of trouble (relatively) with adding inline hints, removing unneeded array bounds checks, and SIMD instructions, see https://github.com/telekons/42nd-at-threadmill, which I believe is quite a bit faster than a similar "blazing fast" Rust project featured on HN not long ago. But my point again isn't so much that CL is the fastest or has some fast projects, just that it can be made as fast as you're probably going to want. This applies to a lot of other languages too, though CL still feels pretty unique in supporting both high-level and low-level constructs.

    Your GC comment is weird, since any publications looking at total performance invariably include the GC overhead; nothing is hidden. The TIME macro for micro-benchmarking even typically reports allocated memory, which can be a proxy for estimating GC pressure, and both SBCL and CCL report the actual time (if any) spent in GC [see the TIME sketch after this list]. Why not complain that C++ benchmarks hide the indirect costs of memory fragmentation, which is a real bane for long-running C++ programs like application servers? But I'll admit that the GC can be a big weakness and it's no fun to be forced to code around its limitations, and historically some GCs used by some Lisps were really bad (that is, huge pause times). I've been looking at the herculean GC work being done in the Java ecosystem for years with a jealous eye, and even at the newer Nim with its swappable GCs, for when you want to make certain tradeoffs without having to code them in.

  • Software drag racing
    1 project | /r/lisp | 12 Jul 2021
    Having beaten performance out of SBCL before, it seems...unlikely that there would be a benefit to fixnums when everything is inline and unboxed, except for isqrt, but AIUI we only compute it once per sieve so I'm not terribly bothered by that.
  • A SIMD-accelerated lock-free concurrent hash table.
    1 project | /r/Common_Lisp | 16 Mar 2021
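
The Prechelt discussion above mentions adding type declarations and checking the generated code with DISASSEMBLE. Below is a minimal, hypothetical sketch of that workflow on SBCL (standard Common Lisp; the function name is made up for illustration and is not code from 42nd-at-threadmill):

    ;; With the FIXNUM declarations in place, SBCL compiles the addition to
    ;; inline machine arithmetic (ADD/LEA) instead of a call to generic CL:+.
    (defun sum-to (n)
      (declare (type fixnum n)
               (optimize (speed 3) (safety 1)))
      (let ((acc 0))
        (declare (type fixnum acc))
        (dotimes (i n acc)
          (setf acc (the fixnum (+ acc i))))))

    ;; Inspect the generated machine code to confirm the generic-add call is
    ;; gone; removing a declaration brings back a compiler note and the call.
    (disassemble #'sum-to)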
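The same comment points at the TIME macro as a quick way to see allocation (and, in SBCL or CCL, GC time) for a form. A small sketch; the reported figures are machine-dependent:

    ;; TIME prints run time, bytes consed, and any time spent in GC, so the
    ;; cost of allocation is visible in an ordinary micro-benchmark.
    (time
     (let ((acc '()))
       (dotimes (i 100000)
         (push (make-list 10) acc))   ; deliberately allocation-heavy
       (length acc)))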

numericals

Posts with mentions or reviews of numericals. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-02.
  • numericals - Performance of NumPy with the goodness of Common Lisp
    8 projects | /r/lisp | 2 Aug 2022
    How about the semantics? Nevermind, I looked -- utter nonsense, just like numpy.
  • Good Lisp libraries for math
    7 projects | /r/lisp | 21 May 2022
    Then there is a question: do you actually need these libraries? You can optimize code in Common Lisp (type declarations, appropriate data structures, SIMD instructions, etc.) [a hedged SIMD sketch appears after this list]. See this: https://github.com/digikar99/numericals/tree/master/sbcl-numericals <- SIMD instructions used from SBCL (on x86; these are processor-family-specific, so Apple M1 will have different ones).
  • Image classification in CL? Help with starting point
    8 projects | /r/Common_Lisp | 20 Sep 2021
    I have not; I have a couple of WIP/alpha-stage libraries like dense-arrays and numericals that could be useful; once I find the time, I want to think about whether these or their dependencies can be integrated into the existing libraries, including antik, mentioned by awesome-cl.
  • Machine Learning in Lisp
    12 projects | /r/lisp | 4 Jun 2021
    Personally, I've been relying on the stream-based method using py4cl/2, mostly because I did not - and perhaps do not - have the knowledge and time to dig into the CFFI-based method. The limitation is that this would get you fewer than 10,000 Python interactions per second. That is sufficient if you will be running a long-running Python task - and I have successfully run trivial ML programs using it - but any intensive array processing gets in the way. For this latter task, there are a few emerging libraries like numcl and array-operations without SIMD (yet), and numericals using SIMD. For reasons mentioned in the readme, I recently cooked up dense-arrays. This has interchangeable backends and can also use cl-cuda. But barring that, the developer overhead of actually setting up a native-CFFI ecosystem is still too high, and I'm back to py4cl/2 for tasks beyond array processing. [See the py4cl2 sketch after this list.]
  • polymorphic-functions - Possibly AOT dispatch on argument types with support for optional and keyword argument dispatch
    9 projects | /r/lisp | 21 May 2021
    I made this while running into code modularity issues with the numericals project I attempted last year; I did discover specialization-store, but found its goals in conflict with what I wanted to achieve, so I ended up investing in this.
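
The "Good Lisp libraries for math" reply above notes that SIMD instructions are reachable from SBCL. Here is a hedged sketch of that general technique using the separate sb-simd library's AVX2 operations (an illustration only, assuming an x86-64 CPU with AVX2; it is not necessarily how numericals or sbcl-numericals is implemented):

    ;; Load sb-simd first, e.g. (ql:quickload "sb-simd").
    ;; Adds two double-float vectors four lanes at a time; for simplicity the
    ;; sketch assumes the vector length is a multiple of 4.
    (defun f64-add! (a b c)
      (declare (type (simple-array double-float (*)) a b c)
               (optimize (speed 3) (safety 0)))
      (loop for i of-type fixnum below (length a) by 4
            do (setf (sb-simd-avx2:f64.4-aref c i)
                     (sb-simd-avx2:f64.4+ (sb-simd-avx2:f64.4-aref a i)
                                          (sb-simd-avx2:f64.4-aref b i))))
      c)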
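The "Machine Learning in Lisp" comment describes the stream-based py4cl/2 approach and its per-interaction overhead. A rough sketch of what that looks like, assuming py4cl2's pyexec/pyeval entry points (treat the exact names as an assumption; the point is that every call is a full round trip to the Python subprocess):

    ;; (ql:quickload "py4cl2")  ; assumes py4cl2 is available via Quicklisp
    (py4cl2:pyexec "import numpy as np")
    (py4cl2:pyeval "np.arange(5) ** 2")   ; one round trip, returns a Lisp vector

    ;; Each iteration is a separate round trip, which is why throughput tops
    ;; out at a few thousand interactions per second; batching work into a
    ;; single Python-side expression keeps it to one round trip.
    (time (dotimes (i 1000) (py4cl2:pyeval "1 + 1")))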

What are some alternatives?

When comparing 42nd-at-threadmill and numericals you can also consider the following projects:

cl-trello-clone - A Trello clone demo app in Common Lisp

cl-cuda - Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.

luckless - Lockless data structures for Common Lisp

py4cl - Call python from Common Lisp

concurrent-hash-tables - A "portability" library for concurrent hash tables in Common Lisp

py4cl2 - Call python from Common Lisp

specialization-store - A different type of generic function for common lisp.

Petalisp - Elegant High Performance Computing

dense-arrays - Numpy like array object for common lisp

specialized-function - Julia-like dispatch for Common Lisp

polymorphic-functions - A function type to dispatch on types instead of classes with partial support for dispatching on optional and keyword argument types.

numcl - Numpy clone in Common Lisp