-
PurefunctionPipelineDataflow
My Blog: The Math-based Grand Unified Programming Theory: The Pure Function Pipeline Data Flow with principle-based Warehouse/Workshop Model
-
concurrencpp
Modern concurrency for C++. Tasks, executors, timers and C++20 coroutines to rule them all
-
Vrmac
Vrmac Graphics, a cross-platform graphics library for .NET. Supports 3D, 2D, and accelerated video playback. Works on Windows 10 and Raspberry Pi4.
It's the ultimate solution:
The Grand Unified Programming Theory: The Pure Function Pipeline Data Flow with Principle-based Warehouse/Workshop Model
The Apple M1 chip is the best case.
https://github.com/linpengcheng/PurefunctionPipelineDataflow
I see a lot of potential in pipeline concurrency, as seen in dataflow (DF) and flow-based programming (FBP): modeling computation as pipelines where one component sends data to the next via message passing. As long as there is enough data, multiple components in the chain can work concurrently.
The benefit is that no synchronization is needed other than the data sent between processes, and race conditions are ruled out as long as only one process is allowed to work on a data item at a time (this is the rule in FBP).
The main blocker, I think, is that it requires quite a rethink of software architecture. I see this rethink already happening in larger, especially distributed, systems, which are to a great extent modeled around these principles already, using systems such as Kafka and message queues to communicate, which more or less forces people to model computations around the data flow.
I think the same could happen inside monolithic applications too, with the right tooling. The concurrency primitives in Go are superbly suited to this in my experience, given that you work with the right paradigm, which I've written about before [1, 2] and started building a micro-unframework for [3] (though the latter will be possible to make much nicer once we get generics in Go).
But then, I also think there are some lessons to be learned about the right granularity for processes and data in the pipeline. Due to the overhead of message passing, it will not make sense performance-wise to use dataflow for the very finest-grained data.
Perhaps this in a sense parallels what we see with distributed computing, where there is a breaking point below which distributing the computation isn't really worth it, because of all the overhead, both in performance and in complexity.
[1] https://blog.gopheracademy.com/composable-pipelines-pattern/
[2] https://blog.gopheracademy.com/advent-2015/composable-pipeli...
[3] https://flowbase.org
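The FBP-style pipeline described above can be sketched in Go with nothing but channels and goroutines. This is a minimal illustration under the comment's premise (the stage names are made up for the example, not taken from any of the linked libraries):

```go
package main

import "fmt"

// generate sends the integers 1..n down a channel, then closes it.
func generate(n int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 1; i <= n; i++ {
			out <- i
		}
	}()
	return out
}

// square reads ints, squares each, and forwards the result.
// Each stage owns the item it is currently processing — no shared
// state, so no locks are needed (the FBP rule mentioned above).
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- v * v
		}
	}()
	return out
}

// sum drains the pipeline and returns the total.
func sum(in <-chan int) int {
	total := 0
	for v := range in {
		total += v
	}
	return total
}

func main() {
	// The stages run concurrently; the only synchronization
	// is the channel sends and receives between them.
	fmt.Println(sum(square(generate(10)))) // 1+4+9+...+100 = 385
}
```

Note the granularity caveat from the comment applies here too: pushing single ints through channels is fine for illustration, but in a real pipeline you would batch items so the per-message overhead doesn't dominate the actual work.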
> my point is that the GPU really is not the "first-class citizen" that the CPU is.
On Windows, the GPU has been a first-class citizen since Vista. In Vista, Microsoft started using D3D10 for their desktop compositor; in Windows 7 they upgraded to Direct3D 11.
The transition wasn’t smooth. Before Vista, mostly gamers had 3D GPUs, so many people needed new computers. Technically, Microsoft had to change the driver model to support a few required features.
On the bright side, now that the XP-to-Win7 transition is long in the past, 3D GPUs are used for everything on Windows. All web browsers use D3D to render, albeit not directly, but through higher-level libraries like Direct2D and DirectWrite.
Linux doesn’t even have these higher-level libraries. They could be implemented on top of whichever GPU API is available (https://github.com/Const-me/Vrmac#vector-graphics-engine), but so far nobody has done it well enough.
It’s a very similar situation with GPU compute on Linux: the kernel and driver support has arrived by now, but the higher-level user-mode pieces are still missing.
Related posts
-
Concurrencpp – a C++20 library for coroutines and executors
-
Comparing asio to unifex
-
Do you think the current asynchronous models (executors, senders) are too complicated and really we just need channels and coroutines running on a thread pool?
-
concurrencpp version 0.1.6 has been released!
-
What happens if you co_await a std::future, and why is it a bad idea? - The Old New Thing