A Cost Model for Nim

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • cligen

    Nim library to infer/generate command-line-interfaces / option / argument parsing

  • Nim gives a bit more choice in many dimensions than most languages -- how to manage memory, whether to use the stdlib at all for things like hash tables, and yes, also syntactic choices like several ways to call a function. This can actually be convenient in constructing a DSL for something with minimal fuss. While `func arg1 arg2` might look weird in "real" "code", it might look great inside some DSL.

    There are also compile-time superpowers like macros that simply receive a parsed AST. That can be used to "re-parse" or "re-compile" external code, as in https://github.com/c-blake/cligen (a minimal sketch follows at the end of this comment). So, trade-offs like in all of life.

    There is even a book called The Paradox of Choice [1]. I think there is just a spectrum of human predisposition where some like to have things "standardized & packaged up for them" while others prefer to invent their own rules... and there is enough variation within the population that people have to learn to agree to disagree more often.

    I do feel like the syntax is far less chaotic than Perl.

    [1] https://en.wikipedia.org/wiki/The_Paradox_of_Choice
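
    As a rough illustration of the macro point above (not code from the post), here is the typical way cligen's `dispatch` is used: it receives the parsed signature of a proc at compile time and generates option parsing from it. The `greet` proc and its parameters are invented for the example.

    ```nim
    import cligen                    # third-party package: nimble install cligen

    proc greet(name = "world", times = 1) =
      ## Print a greeting `times` times (example proc, not from the post).
      for _ in 1 .. times:
        echo "Hello, ", name         # echo uses the paren-free command call syntax

    when isMainModule:
      # dispatch is a cligen macro: it inspects greet's signature at compile time
      # and generates --name/--times option parsing plus a --help screen from it.
      dispatch(greet)
    ```

    Compiled with `nim c`, this yields a CLI whose options mirror the proc's parameters, with no hand-written argument parsing.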

  • nitter

    Alternative Twitter front-end

  • A well-known and nice app that is built with Nim is [Nitter](https://github.com/zedeus/nitter), a free and open source alternative Twitter front-end focused on privacy and performance.

  • ratel

  • By the way, this is a great project: https://github.com/PMunch/ratel

    What happened to that cool landing page you had?

  • tiny_sqlite

    A thin SQLite wrapper for Nim

    I did some work on Nim's hash tables back in 2020, specifically on OrderedTable, which is comparable to a Python dict in that insertion order is preserved. I stumbled on this table module in a roundabout way, via Nim's database module, db_sqlite. The db_sqlite module was much slower than Python for simple tests, and on investigation I found that it didn't automatically handle prepared statement caching like Python's sqlite3 module does. There were some other issues with db_sqlite, like blob handling and null handling, which led me to a different SQLite interface, tiny_sqlite. This was a big improvement, handling both nulls and blobs, and the developer was great to work with. But it also didn't support prepared statement caching. I filed an issue and he implemented it, using Nim's OrderedTable to simulate an LRU cache: add a new prepared statement and delete the oldest one if the cache gets too big (a minimal sketch of this pattern appears at the end of this comment):

    https://github.com/GULPF/tiny_sqlite/issues/3

    Performance was hugely improved. There was another LRUCache implementation I played with, and when using that for the statement cache, performance was 25% faster than with OrderedTable. That didn't make much sense to me for a 100-entry hash table, so I started running some tests comparing LRUCache and OrderedTable. What I discovered is that OrderedTable's delete operation created an entirely new copy of the table, minus the entry being deleted, on every delete. That seemed pretty crazy, especially since it was already showing up as a performance problem in a 100-entry table.

    The tiny_sqlite developer switched to LRUCache, and I did some work on the OrderedTable implementation to make deletes O(1) as expected with hash table operations:

    https://github.com/nim-lang/Nim/pull/14995

    After spending a lot of time on this, I finally gave up. The problems were:

    - the JSON implementation used OrderedTables and never did deletes. JSON benchmark performance was rather sacred, so making OrderedTables slightly slower/larger (I used a doubly-linked list) was not desirable, even though it improved delete performance from O(n) to O(1).

    - the Nim compiler also used OrderedTables and never did deletes

    - Nim tables allowed multiple values for the same key (I did help get that deprecated).

    - alternatives were proposed by others that maintained insertion order until a delete occurred, after which the table could become unordered. That made no sense to me.

    The TLDR is: if you use Nim tables, don't use OrderedTable unless you can afford to make a copy of the table on every delete.

    Current Nim OrderedTable delete code: https://github.com/nim-lang/Nim/blob/15bffc20ed8da26e68c88bb...

    Issue for db_sqlite not handling nulls, blobs, statement cache: https://github.com/nim-lang/Nim/issues/13559
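
    For context, here is a minimal sketch of the statement-cache pattern described above: simulate an LRU-ish cache with an OrderedTable by evicting the oldest insertion when over capacity. The `maxCached` limit and the `int` statement handle are placeholders, not tiny_sqlite's actual types.

    ```nim
    import std/tables

    const maxCached = 100                              # placeholder capacity

    var stmtCache = initOrderedTable[string, int]()    # SQL text -> prepared-stmt handle

    proc cacheStmt(sql: string; handle: int) =
      ## Insert a prepared statement; evict the oldest entry when over capacity.
      stmtCache[sql] = handle
      if stmtCache.len > maxCached:
        var oldest: string
        for key in stmtCache.keys:                     # iterates in insertion order
          oldest = key
          break
        # This del is the operation discussed above: for years it rebuilt the
        # whole table (O(n)) instead of unlinking a single entry.
        stmtCache.del(oldest)
    ```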

  • Nim

    Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).

  • adix

    An Adaptive Index Library for Nim

  • which is notably logarithmic - not unlike a B-Tree.

    When these expectations are exceeded, you can at least detect a DoS attack. If you wait until such an attack is actually seen, you can activate a "more random" mitigation on the fly, at about the same cost as "the next resize/re-org/whatnot".

    All you need to do is instrument your search to track the probe depth. There is an example of such a strategy in Nim at https://github.com/c-blake/adix for simple Robin Hood linear-probed tables; a rough sketch of the depth-tracking idea follows below.
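
    Here is a rough sketch of that depth-tracking idea, not adix's actual code: the `depthFactor` bound, the use of 0 as the empty marker, and the assumption of a non-negative hash are all simplifications for illustration.

    ```nim
    import std/math

    const depthFactor = 4.0          # assumed tolerance factor, not an adix constant

    proc suspicious(depth, n: int): bool =
      ## True when a probe sequence is much longer than the ~log2(n) expectation.
      n > 1 and depth.float > depthFactor * log2(n.float)

    proc findSlot(slots: seq[int]; key, hash: int): int =
      ## Linear probe starting at `hash mod slots.len`; 0 marks an empty slot.
      ## Returns the index holding `key`, or -1 if it is absent.
      var i = hash mod slots.len     # assumes hash >= 0
      var depth = 0
      while slots[i] != 0 and depth < slots.len:
        if slots[i] == key:
          return i
        inc depth
        if suspicious(depth, slots.len):
          # Detection point: an attacker forcing deep probes shows up here, and a
          # "more random" (e.g. salted) hash can be activated at the next re-org.
          echo "unusually deep probe: ", depth
        i = (i + 1) mod slots.len
      return -1
    ```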

  • nwaku

    Waku node and protocol.

  • nimbus-eth2

    Nim implementation of the Ethereum Beacon Chain

  • bu

    B)asic|But-For U)tility Code/Programs (in Nim & Often Unix/POSIX/Linux Context)

  • I have not found this slower-than-C claim to be the case. You may need to use something like `iterator getDelims` in https://github.com/c-blake/cligen/blob/master/cligen/osUt.ni... to manage memory with fewer copies/less allocation, "more like C", though -- or perhaps use `std/memfiles` memory-mapped IO (a small sketch follows below).

    More pithy ways to put it are that "there are no speed limits" or "Nim responds to optimization effort about as well as other low level languages like C". You can deploy SIMD intrinsics, for example. In my experience, it's not that hard to "pay only for what you use".

    As a more concrete thing, I have timed (yes, on one CPU, one test case, etc..many caveats) the tokenizer used in https://github.com/c-blake/bu/blob/main/rp.nim to be faster than that used by the Rust xsv.

    Of course, you really shouldn't tokenize a lot if it's costly, but rather save a binary answer that does not need parsing (or perhaps more accurately is natively parsed by the CPU).
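
    For instance, a small sketch of the `std/memfiles` approach mentioned above: it memory-maps a file and walks it as zero-copy slices, never allocating a string per line. The file name is made up.

    ```nim
    import std/memfiles

    proc countRecords(path: string): int =
      ## Count newline-delimited records via memory-mapped IO.
      var mf = memfiles.open(path)
      defer: mf.close()
      for slice in memSlices(mf):    # MemSlice = pointer + length; no per-line copy
        inc result

    when isMainModule:
      echo countRecords("data.txt")  # hypothetical input file
    ```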

  • karax

    Karax. Single page applications for Nim.

  • > the real killer feature to me is the javascript target

    Agreed, this is amazing because you can share code and data structures between the frontend and the backend (for example: https://github.com/karaxnim/karax).

    Also, it's really nice having high-level features like metaprogramming and static typing span both targets. Things like reading a spec file and generating statically checked APIs for server and client are straightforward, which opens up a lot of possibilities; a minimal shared-module sketch follows below.
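
    A minimal sketch of that sharing, with a made-up `User` type: the same module compiles natively for the server (`nim c`) and to JavaScript for the browser (`nim js`), so the types and logic stay in one place.

    ```nim
    # shared.nim -- hypothetical module used by both backend and frontend
    type
      User* = object
        name*: string
        score*: int

    proc rank*(u: User): string =
      ## Identical statically-typed logic on both the C and the JS targets.
      if u.score >= 100: "gold" else: "bronze"

    when isMainModule:
      echo rank(User(name: "alice", score: 120))
    ```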
