Programming language comparison by reimplementing the same transit data app

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • transit-lang-cmp

    Programming language comparison by reimplementing the same transit data app

  • This is great! Just pushed up a commit that uses it and updated the benchmarks[0]. I'm seeing a 1.6X - 2X improvement in overall performance. Not bad for a drop-in replacement. And since it's based on serde, I trust it, and trying out a different JSON library feels within scope for me rather than just "gaming the benchmarks", as this is actually something I'd now consider using at work.

    It's not quite as high as I was seeing with `jiffy` (3,800 req/sec here vs 4,000+ with jiffy), but I'm not confident that was a totally fair comparison. `jiffy` doesn't integrate as nicely with Phoenix, so I was just calling `:jiffy.encode(...)` in the controller and then doing a `text(...)` response. I need to double-check if `json(...)` is doing more work here.

    [0] https://github.com/losvedir/transit-lang-cmp/commit/140d693b...

  • tapir

    Declarative, type-safe web endpoints library

  • I do wonder where the recommendation to use http4s for beginners came from. http4s is a very capable library (and if you care much about composition it is excellent), but I wouldn't describe the documentation as beginner friendly.

    A slightly better starting point for Scala 3 + type-safe server building is tapir, e.g. https://github.com/softwaremill/tapir/blob/master/examples3/... . With that, you get a declarative definition of your endpoints (plus error types, auth, etc.) that you can use for both servers and clients, which of course comes in very handy when writing integration tests.
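
    As a rough sketch (the endpoint name, path, and types below are made up for illustration, not taken from that example), a tapir endpoint description in Scala 3 looks something like this:

        import sttp.tapir.*

        // Hypothetical endpoint: GET /schedules/{tripId}, returning plain text,
        // with a plain-text error channel. Purely illustrative.
        val getSchedules: PublicEndpoint[String, String, String, Any] =
          endpoint.get
            .in("schedules" / path[String]("tripId"))
            .errorOut(stringBody)
            .out(stringBody)

    The same value can then be handed to one of tapir's server interpreters or to its sttp client interpreter, which is what makes it reusable in integration tests.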

    > absolutely ridiculous the fetishization of extremely complex FP and type-level hacking that goes on in the ecosystem

    An alternative way to look at it is that there is a lot of essential domain complexity that gets encoded via the type system to let the compiler do the hard work. That "extremely complex FP" does not come out of nowhere - I really recommend at least skimming the slides from rossabaker, the http4s designer, which motivate where the core type signature comes from: https://rossabaker.github.io/boston-http4s/#2
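
    For context, the core type http4s lands on is roughly a function Request[F] => F[Option[Response[F]]], packaged as Kleisli[OptionT[F, *], Request[F], Response[F]]. A minimal sketch of a route in that shape (the route itself is illustrative, not from the benchmark code):

        import cats.effect.IO
        import org.http4s.HttpRoutes
        import org.http4s.dsl.io.*

        // HttpRoutes.of lifts a partial function over requests into the
        // Kleisli/OptionT shape: unmatched requests become None, so routes
        // compose and fall through to the next handler.
        val routes: HttpRoutes[IO] = HttpRoutes.of[IO] {
          case GET -> Root / "hello" / name => Ok(s"Hello, $name")
        }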

    I suppose one of the "features" I like about the (typelevel) community is that it does not take the "worse is better" approach, and a lot of effort is expended to make things correct, modular, and orthogonal. This has the drawback of increased upfront complexity, which anecdotally pays off the moment your code compiles and the program runs as intended.

  • realworld

    "The mother of all demo apps" — Exemplary fullstack Medium.com clone powered by React, Angular, Node, Django, and many more

  • For a similar project, someone put together a spec for a basic clone of Medium, split into a frontend design and a backend API, and the project now has over 100 different implementations that let you pair any frontend with any backend (plus a few fullstack implementations).

    https://github.com/gothinkster/realworld

    It's not as focused on performance and benchmarking, but it's a good point of comparison for various languages and frameworks implementing common behavior.

  • scotty

    Haskell web framework inspired by Ruby's Sinatra, using WAI and Warp (Official Repository)

  • deno_std

    Discontinued Deno standard modules

  • This was fun to read through.

    I would need to profile the code, but Deno's bad startup time seems like it may be a combination of the code in here being unoptimized:

    https://github.com/denoland/deno_std/blob/0ce558fec1a1beeda3...

    (e.g., lots of temporaries)

    and the use of the readFileSync+TextDecoder API instead of readTextFile (which is also a docs issue, since the docs suggest the former). It seems the code loads the 100MB into memory, then converts it into another 100MB of UTF-8, then parses that with the inefficient CSV decoder. The Rust and Go versions look to be doing stream/incremental processing instead.
