Why messaging is much better than REST for inter-microservice communications

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • mats3

    Mats3: Message-based Asynchronous Transactional Staged Stateless Services

    This is a "tack-on" tool to the otherwise fully async nature of Mats/messaging.

    > For this to really work well, the message passing has to be integrated with the CPU dispatcher

    It sounds like you are 100% set on speed. That is not really what Mats is after - it is meant as an inter-service communication system, and IO will be your limiting factor at any rate. Mats sacrifices a bit of speed for developer ergonomics - the idea is that by making fully async development of ISC easy in a complex microservice system, you gain back that potential loss from a) actually being able to use fully async processing (!), and b) the inherent speed of messaging (it is at least as fast as HTTP, and you avoid the overhead of HTTP headers etc.).

    It is mentioned here, "What Mats is not": https://github.com/centiservice/mats3#what-mats-is-not
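
    For orientation, here is a rough sketch of the staged, fully asynchronous flow the comment above is talking about, written against plain JMS rather than Mats3's own API. The QuoteStage class and the queue names are invented for illustration; Mats3 wraps this shape in typed endpoints and stages rather than raw JMS plumbing.

        import javax.jms.*;

        // One "stage": consume a request from a queue, do some work, and pass the
        // intermediate result to the next queue - no thread ever waits for a reply.
        public class QuoteStage {
            // Wire this up with any JMS ConnectionFactory (ActiveMQ, Artemis, ...).
            public static void start(ConnectionFactory factory) throws JMSException {
                Connection connection = factory.createConnection();
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);

                MessageConsumer in  = session.createConsumer(session.createQueue("quotes.calculate"));
                MessageProducer out = session.createProducer(session.createQueue("quotes.enrich"));

                in.setMessageListener(message -> {
                    try {
                        String request = ((TextMessage) message).getText();
                        String intermediate = request + ":priced";   // stand-in for the real work
                        out.send(session.createTextMessage(intermediate));
                        session.commit();                            // receive and send commit together
                    } catch (JMSException e) {
                        throw new RuntimeException(e);
                    }
                });

                connection.start();
            }
        }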

  • ideas2

    Another 85+ Ideas for Computing https://samsquire.github.io/ideas2/

    Thanks for this.

    I love the idea of breaking up a flow into separately scheduled stages while keeping the message flow linear.

    I wrote about a similar idea in ideas2

    https://github.com/samsquire/ideas2#84-communication-code-sl...

    The idea is that I enrich my code with comments and a transpiler schedules different parts of the code to different machines and inserts communication between blocks.

    I read about ZooKeeper's algorithm for transactionality and robustness to dropped messages, which is interesting reading.

    https://zookeeper.apache.org/doc/r3.4.13/zookeeperInternals....

    How does Mats compare?

    LMAX Disruptor has a pattern where you split each side of an IO request into two events to avoid blocking in a handler, so you always insert a new event to handle an IO response.
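
    A minimal Java sketch of that two-event shape (this is not the Disruptor API itself; the event types and the queue standing in for the ring buffer are invented here):

        import java.util.concurrent.*;

        // A handler never blocks on IO: it starts the IO, and the completion callback
        // publishes a second event carrying the response back onto the event stream.
        public class TwoEventIo {
            sealed interface Event permits IoRequested, IoCompleted {}
            record IoRequested(String url) implements Event {}
            record IoCompleted(String url, String body) implements Event {}

            private final BlockingQueue<Event> events = new LinkedBlockingQueue<>(); // ring buffer stand-in
            private final ExecutorService ioPool = Executors.newFixedThreadPool(4);

            public void run() throws InterruptedException {
                while (true) {
                    Event e = events.take();
                    if (e instanceof IoRequested req) {
                        CompletableFuture
                            .supplyAsync(() -> fetch(req.url()), ioPool)        // IO runs off the event thread
                            .thenAccept(body -> events.offer(new IoCompleted(req.url(), body)));
                    } else if (e instanceof IoCompleted done) {
                        System.out.println(done.url() + " -> " + done.body().length() + " bytes");
                    }
                }
            }

            private String fetch(String url) { return "fake body for " + url; }  // stand-in for real IO
        }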

  • cadence

    Cadence is a distributed, scalable, durable, and highly available orchestration engine to execute asynchronous long-running business logic in a scalable and resilient way.

    Having written a reasonable amount of messaging code in my time, I would say the final form of this sort of thing might look more like Cadence[0] than anything like this.

    [0] https://github.com/uber/cadence
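
    For a sense of what that looks like, here is a conceptual sketch of the "durable workflow as ordinary code" style Cadence provides. None of these types come from the Cadence client library - the names are invented - but the shape is the point: long-running business logic written sequentially, with each step dispatched, retried, and persisted by the orchestration engine so the flow survives process restarts.

        // Invented interfaces, for illustration only - not the Cadence API.
        interface PaymentActivities  { String charge(String orderId); void sendReceipt(String orderId, String chargeId); }
        interface ShippingActivities { void ship(String orderId); }

        class OrderWorkflow {
            private final PaymentActivities payments;    // each call becomes a task the engine
            private final ShippingActivities shipping;   // schedules, retries, and records

            OrderWorkflow(PaymentActivities payments, ShippingActivities shipping) {
                this.payments = payments;
                this.shipping = shipping;
            }

            // Reads like synchronous code, but may run for days: the engine persists
            // progress after each activity and resumes from there after a crash.
            void placeOrder(String orderId) {
                String chargeId = payments.charge(orderId);
                shipping.ship(orderId);
                payments.sendReceipt(orderId, chargeId);
            }
        }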

  • Apache Camel

    Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.

    This reminds me more of Apache Camel[0] than other things it's being compared to.

    > The process initiator puts a message on a queue, and another processor picks that up (probably on a different service, on a different host, and in different code base) - does some processing, and puts its (intermediate) result on another queue

    This is almost exactly the definition of message routing (ie: Camel).

    I'm a bit doubtful about the pitch, because the solution is presented as letting you keep a synchronous style of programming while getting the benefits of async processing. That just isn't true; these are fundamental tradeoffs. If you need a synchronous answer back, then no amount of queuing, routing, or prioritisation will save you when the resource providing that answer is unavailable, and having your synchronous client hang indefinitely waiting for a reply message, instead of erroring hard and fast, is not a desirable outcome at all. If you go into this ad hoc and build in a leaky abstraction that asynchronous things are actually synchronous and vice versa, before you know it you will have unstable behaviour or, even worse, deadlocks all over your system - and the worst part is that the true state of the system is now hidden in whichever messages are pending in transient message queues everywhere.

    What really matters here is to design things from the start with patterns that let you be very explicit about what needs to be synchronous vs async (building on principles of idempotency, immutability, and coherence to maximise the cases where async is the answer).

    The notion of Apache Camel is to make all of these decisions first-class elements of your framework and then extract the routing layer as a dedicated construct. The fact that it generalises beyond message queues (treating literally anything that can provide a piece of data as a message provider) is a bonus.

    [0] https://camel.apache.org/
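
    A minimal Camel route for the flow quoted above - pick a message up from one queue, do some processing, and put the intermediate result on the next queue. The queue names are invented, and the "jms" component still needs a ConnectionFactory configured on the context.

        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.impl.DefaultCamelContext;

        public class QuoteRoute extends RouteBuilder {
            @Override
            public void configure() {
                from("jms:queue:quotes.calculate")
                    .log("calculating quote for ${body}")
                    .process(exchange -> exchange.getMessage()                 // stand-in for real work
                        .setBody(exchange.getMessage().getBody(String.class) + ":priced"))
                    .to("jms:queue:quotes.enrich");                            // next processor picks this up
            }

            public static void main(String[] args) throws Exception {
                DefaultCamelContext context = new DefaultCamelContext();
                context.addRoutes(new QuoteRoute());
                context.start();
            }
        }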

  • SocketCluster

    Highly scalable realtime pub/sub and RPC framework

    Interesting how this feature set is pretty much exactly the same as what SocketCluster offers: https://socketcluster.io/

  • boxcar

    Boxcar RPC

    I made a very similar project in Rust that seems to mimic this idea: https://github.com/volfco/boxcar

    The core idea I had was to decouple the connection from the execution of the RPC. Mats3 looks to be doing a lot more than what I've done so far, but it's nice to see similar ideas out there to take inspiration from.
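
    A small Java sketch of that decoupling idea (this is not boxcar's API, which is Rust; the names here are invented): submitting a call returns a ticket immediately, the work runs independently of any connection, and the result can be collected later, possibly over a different connection.

        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.*;

        class DetachedRpcServer {
            private final ExecutorService workers = Executors.newFixedThreadPool(8);
            private final Map<String, CompletableFuture<String>> results = new ConcurrentHashMap<>();

            // Called when a request arrives: start the work, hand back a ticket, drop the connection.
            String submit(Callable<String> rpc) {
                String ticket = UUID.randomUUID().toString();
                CompletableFuture<String> result = new CompletableFuture<>();
                results.put(ticket, result);
                workers.submit(() -> {
                    try { result.complete(rpc.call()); }
                    catch (Exception e) { result.completeExceptionally(e); }
                });
                return ticket;
            }

            // Called later, from any connection, to collect (or briefly wait for) the result.
            String poll(String ticket, long timeoutMillis) throws Exception {
                return results.get(ticket).get(timeoutMillis, TimeUnit.MILLISECONDS);
            }
        }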

