-
This is a "tack on"-tool to the otherwise fully async nature of Mats/messaging.
> For this to really work well, the message passing has to be integrated with the CPU dispatcher
It sounds like you are 100% set on speed. This is not really what Mats is after - it is meant as an inter-service communication system, and IO will be your limiting factor at any rate. Mats sacrifices a bit of speed for developer ergonomics - the idea is that by easily enabling fully async development of ISC in a complex microservice system, you gain back that potential loss from a) actually being able to use fully async processing (!), and b) the inherent speed of messaging (it is at least as fast as HTTP, and you avoid the overhead of HTTP headers etc.).
It is mentioned here, "What Mats is not": https://github.com/centiservice/mats3#what-mats-is-not
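To picture what such a tack-on bridge typically looks like (presumably this refers to a sync-over-async bridge like Mats3's MatsFuturizer), here is a minimal sketch of the general pattern - made-up class and method names, not Mats3's actual API: the caller gets a CompletableFuture, while the transport underneath stays plain fire-and-forget messaging.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a sync-over-async bridge: the caller gets a CompletableFuture,
    // while the transport underneath stays plain fire-and-forget messaging.
    class SyncOverAsyncBridge {
        private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

        // Called from synchronous code, e.g. an HTTP controller.
        CompletableFuture<String> request(String payload) {
            String correlationId = UUID.randomUUID().toString();
            CompletableFuture<String> future = new CompletableFuture<>();
            pending.put(correlationId, future);
            sendRequestMessage(correlationId, payload); // fire-and-forget onto the broker
            return future;
        }

        // Invoked by the message listener when the async reply flow comes back.
        void onReply(String correlationId, String replyPayload) {
            CompletableFuture<String> future = pending.remove(correlationId);
            if (future != null) {
                future.complete(replyPayload);
            }
        }

        private void sendRequestMessage(String correlationId, String payload) {
            // placeholder for the actual send (JMS, a Mats initiation, etc.)
        }
    }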
-
ideas2
Discontinued. Another 85+ Ideas for Computing https://samsquire.github.io/ideas2/
Thanks for this.
I love the idea of breaking a flow up into separately scheduled pieces that still read as one linear message flow.
I wrote about a similar idea in ideas2
https://github.com/samsquire/ideas2#84-communication-code-sl...
The idea is that I enrich my code with comments and a transpiler schedules different parts of the code to different machines and inserts communication between blocks.
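As a concrete (hypothetical) illustration, this is roughly what the annotated input could look like before such a transpiler splits it up - the @machine comment markers and service names are made up for the sketch:

    // Hypothetical input to such a transpiler: plain sequential code, with comments
    // marking which machine each block should run on. The transpiler would cut at
    // the markers and insert the message passing between the machines.
    class OrderFlow {

        void handle(String orderId) {
            // @machine: order-service
            Order order = loadOrder(orderId);

            // @machine: pricing-service
            double price = calculatePrice(order);

            // @machine: order-service
            persistPrice(orderId, price);
        }

        // Stubs so the sketch compiles; the real bodies would live on their own services.
        record Order(String id) {}
        private Order loadOrder(String orderId) { return new Order(orderId); }
        private double calculatePrice(Order order) { return 0.0; }
        private void persistPrice(String orderId, double price) { }
    }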
I read about ZooKeeper's algorithm for transactionality and robustness to messages being dropped, which is interesting reading.
https://zookeeper.apache.org/doc/r3.4.13/zookeeperInternals....
How does Mats compare?
LMAX Disruptor has a pattern where you split each side of an IO request into two events, to avoid blocking in a handler. So you would always insert a new event to handle an IO response.
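A minimal sketch of that two-event split, using a plain queue in place of the Disruptor's ring buffer - the event names and handler are made up for illustration:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.LinkedBlockingQueue;

    // The handler never blocks on IO itself; it kicks the IO off asynchronously, and
    // the response comes back as a *separate* event on the same stream.
    class TwoEventIoSketch {
        interface Event {}
        record IoRequested(String key) implements Event {}
        record IoCompleted(String key, String result) implements Event {}

        private final BlockingQueue<Event> events = new LinkedBlockingQueue<>();

        void onEvent(Event event) {
            if (event instanceof IoRequested r) {
                // Fire the IO without waiting; publish the result as a new event.
                CompletableFuture.supplyAsync(() -> blockingFetch(r.key()))
                        .thenAccept(result -> events.add(new IoCompleted(r.key(), result)));
            } else if (event instanceof IoCompleted c) {
                // The IO response arrives as its own event, handled without ever blocking.
                System.out.println("got " + c.key() + " -> " + c.result());
            }
        }

        private String blockingFetch(String key) {
            return "value-for-" + key; // stand-in for the real blocking IO call
        }
    }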
-
cadence
Cadence is a distributed, scalable, durable, and highly available orchestration engine to execute asynchronous long-running business logic in a scalable and resilient way.
Having written a reasonable amount of messaging code in my time, I would say the final form of this sort of thing might look more like Cadence[0] than anything like this.
[0] https://github.com/uber/cadence
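For contrast with message-flow style code, this is roughly the shape of the workflow-as-code model Cadence uses - a hedged sketch with made-up names, not the actual Cadence Java client API: the orchestration is one ordinary sequential method, and the engine (not your code) persists progress so each step is retried and resumed across worker crashes.

    interface PaymentWorkflow {
        String settle(String orderId);
    }

    interface PaymentActivities {
        String reserveFunds(String orderId);
        void shipOrder(String orderId);
        String captureFunds(String reservation);
    }

    class PaymentWorkflowImpl implements PaymentWorkflow {
        // In Cadence these would be activity stubs whose calls are dispatched to workers.
        private final PaymentActivities activities;

        PaymentWorkflowImpl(PaymentActivities activities) {
            this.activities = activities;
        }

        @Override
        public String settle(String orderId) {
            String reservation = activities.reserveFunds(orderId); // step 1
            activities.shipOrder(orderId);                         // step 2, retried on failure
            return activities.captureFunds(reservation);           // step 3
        }
    }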
-
Apache Camel
Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.
-
Interesting how this feature set is pretty much exactly the same as offered by SocketCluster https://socketcluster.io/
-
I made a very similar project in Rust that seems to mimic this idea: https://github.com/volfco/boxcar
The core idea I had was to decouple the connection from the execution of the RPC. Mats3 looks to be doing a lot more than what I've done so far, but it's nice to see similar ideas out there to take inspiration from.
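For readers unfamiliar with the pattern, here is a small sketch of that decoupling in Java (boxcar itself is Rust, and these names are made up, not its API): the RPC is registered under a handle and keeps executing even if the caller's connection drops, and the result can be fetched later over a new connection.

    import java.util.Map;
    import java.util.Optional;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // The connection is decoupled from the execution: the RPC keeps running under a
    // handle even if the caller disconnects; the result can be polled for later.
    class RpcRegistry {
        private final Map<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();

        // Start the RPC and hand back a handle immediately, not the result.
        String submit(Supplier<String> rpcBody) {
            String handle = UUID.randomUUID().toString();
            inFlight.put(handle, CompletableFuture.supplyAsync(rpcBody));
            return handle;
        }

        // Poll for the result later; empty until the execution has finished.
        Optional<String> poll(String handle) {
            CompletableFuture<String> future = inFlight.get(handle);
            if (future == null || !future.isDone()) {
                return Optional.empty();
            }
            inFlight.remove(handle);
            return Optional.of(future.join());
        }
    }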