Why isn't differential dataflow more popular?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • lambdo

    Feature engineering and machine learning: together at last!

  • Being able to update the (query) output with new input data, rather than reprocessing the whole input even when the change is very small, is indeed a very useful feature. Assume you have one huge input table and have computed a result consisting of a few rows. Now you add one record to the input. A traditional data processing system will process all the input records again, while a differential system will update the existing output result (a minimal code sketch of this behaviour appears at the end of this comment).

    There are the following difficulties in implementing such systems:

    o (Small) changes in the input have to be incrementally propagated to the output as updates rather than as new results. This changes the data processing paradigm, because every operator now has to be "update-aware".

    o Only simple operators can easily be made "update-aware". For more complex operators, like aggregations or rolling aggregations, it is frequently unclear how this can be done even conceptually.

    o Differential updates have to be propagated through a graph of operations (a topology), which makes the task more difficult.

    o Currently popular data processing approaches (SQL or map-reduce) were not designed for this scenario, so some adaptation may be needed.

    Another system that implements such an approach, there called incremental evaluation, is Lambdo:

    https://github.com/asavinov/lambdo - Feature engineering and machine learning: together at last!

    However, this Python library relies on a different, novel data processing paradigm in which operations are applied to columns. Mathematically, it uses two kinds of operations, set operations and function operations, as opposed to traditional approaches based on set operations only.

    A new implementation is here:

    https://github.com/asavinov/prosto - Functions matter! No join-groupby, No map-reduce.

    However, incremental evaluation is implemented only for simple operations (calculated columns).
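
    A minimal sketch of the incremental behaviour described at the top of this comment, assuming recent versions of the timely and differential-dataflow Rust crates (the key derivation, record counts and timestamps are illustrative, not taken from the comment):

        use differential_dataflow::input::InputSession;
        use differential_dataflow::operators::Count;

        fn main() {
            timely::execute_from_args(std::env::args(), move |worker| {
                let mut input = InputSession::<u64, u64, isize>::new();

                // Build the dataflow once: group records by a derived key and count them.
                let probe = worker.dataflow(|scope| {
                    input.to_collection(scope)
                        .map(|record| record % 10)                  // derive a grouping key
                        .count()                                    // (key, count) per key
                        .inspect(|x| println!("change: {:?}", x))   // ((key, count), time, delta)
                        .probe()
                });

                // "Huge" initial input: one million records.
                for record in 0..1_000_000u64 {
                    input.insert(record);
                }
                input.advance_to(1);
                input.flush();
                worker.step_while(|| probe.less_than(input.time()));

                // Add a single record: only the affected count is retracted and
                // re-asserted; the existing million records are not reprocessed.
                input.insert(1_000_000);
                input.advance_to(2);
                input.flush();
                worker.step_while(|| probe.less_than(input.time()));
            })
            .expect("timely computation failed");
        }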

  • blog

    Some notes on things I find interesting and important. (by frankmcsherry)

  • Importantly, this doesn't just use memoization (it actually avoids having to spend memory on that); rather, it uses operators (nodes in the dataflow graph) that work directly with `(time, data, delta)` tuples. The `time` is a general lattice, so it is fairly flexible (e.g. for expressing loop nesting/recursive computations, but also for handling multiple input sources with their own timestamps), and the requirements on the `delta` type range between a (potentially commutative) semigroup (don't be confused, they use addition as the group operation) and an abelian group. E.g. collections that are iteratively refined in loops often need an abelian `delta` type, while monoids (a semigroup plus an explicit zero element) allow for efficient append-only computations [0]. (A toy illustration of the update representation follows below.)

    [0]: https://github.com/frankmcsherry/blog/blob/master/posts/2019...
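
    A toy illustration (plain Rust, not differential dataflow's actual internal types) of the update representation: each change is a (data, time, delta) tuple, and consolidating a batch just sums the deltas of identical (data, time) pairs, dropping anything that cancels to zero.

        use std::collections::HashMap;
        use std::hash::Hash;

        // One differential update: "at `time`, the multiplicity of `data` changes by `delta`".
        // (The field order is immaterial.)
        type Update<D, T> = (D, T, i64);

        // Sum deltas for identical (data, time) pairs and drop anything that cancels out.
        fn consolidate<D: Hash + Eq, T: Hash + Eq>(updates: Vec<Update<D, T>>) -> Vec<Update<D, T>> {
            let mut totals: HashMap<(D, T), i64> = HashMap::new();
            for (data, time, delta) in updates {
                *totals.entry((data, time)).or_insert(0) += delta;
            }
            totals
                .into_iter()
                .filter(|(_, delta)| *delta != 0)
                .map(|((data, time), delta)| (data, time, delta))
                .collect()
        }

        fn main() {
            let updates = vec![
                ("apple", 1u64, 1),  // insert "apple" at time 1
                ("banana", 1, 1),    // insert "banana" at time 1
                ("apple", 2, -1),    // retract "apple" at time 2...
                ("apple", 2, 1),     // ...and re-insert it: the two changes cancel
            ];
            // Only the surviving net changes remain.
            println!("{:?}", consolidate(updates));
        }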

  • reflow

    A language and runtime for distributed, incremental data processing in the cloud

  • It seems Reflow falls in this category:

    https://github.com/grailbio/reflow

    > Reflow thus allows scientists and engineers to write straightforward programs and then have them transparently executed in a cloud environment. Programs are automatically parallelized and distributed across multiple machines, and redundant computations (even across runs and users) are eliminated by its memoization cache. Reflow evaluates its programs incrementally: whenever the input data or program changes, only those outputs that depend on the changed data or code are recomputed.
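
    The memoization-plus-incrementality idea in the quote can be sketched generically (this is not Reflow's implementation; the step names and digests below are made up): results are cached under a digest of the step's identity and its input digests, so re-running with unchanged inputs is a pure cache hit, and a change recomputes only the steps downstream of it.

        use std::collections::hash_map::DefaultHasher;
        use std::collections::HashMap;
        use std::hash::{Hash, Hasher};

        // Toy memoization cache keyed by (step name, input digest).
        struct Cache {
            results: HashMap<u64, String>,
        }

        impl Cache {
            fn new() -> Self {
                Cache { results: HashMap::new() }
            }

            // Returns (digest of this step's result, the result itself); the result
            // digest is modelled here simply as the cache key.
            fn run_step<F: FnOnce() -> String>(&mut self, name: &str, input_digest: u64, compute: F) -> (u64, String) {
                let mut h = DefaultHasher::new();
                (name, input_digest).hash(&mut h);
                let key = h.finish();
                let out = self
                    .results
                    .entry(key)
                    .or_insert_with(|| {
                        println!("recomputing {}", name); // only runs on a cache miss
                        compute()
                    })
                    .clone();
                (key, out)
            }
        }

        fn main() {
            let mut cache = Cache::new();

            // First run: both (hypothetical) steps execute.
            let (d, a) = cache.run_step("align", 1, || "aligned reads".to_string());
            cache.run_step("call_variants", d, || format!("variants from {}", a));

            // Same inputs again: both steps are cache hits, nothing recomputes.
            let (d, a) = cache.run_step("align", 1, || "aligned reads".to_string());
            cache.run_step("call_variants", d, || format!("variants from {}", a));

            // The upstream input changes: both dependent steps recompute.
            let (d, a) = cache.run_step("align", 2, || "aligned reads v2".to_string());
            cache.run_step("call_variants", d, || format!("variants from {}", a));
        }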

  • differential-dataflow

    An implementation of differential dataflow using timely dataflow on Rust.

  • rslint

    A (WIP) Extremely fast JavaScript and TypeScript linter and Rust crate

  • differential-datalog

    DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.

  • The good news: we're making nice progress on profiling tools and I might get to trying some WCOJ code later today.

    [0]: https://github.com/vmware/differential-datalog

  • Hydra

    Functional hybrid modelling (FHM) language for modelling and simulation of physical systems using implicitly formulated (undirected) Differential Algebraic Equations (DAEs) (by giorgidze)

  • I think there are a lot of similarly interesting paradigms that go mostly unnoticed because of a lack of explanation and simple-to-use APIs.

    My personal favorite is "Functional hybrid modelling" - https://github.com/giorgidze/Hydra

  • diagnostics

    Diagnostic tools for timely dataflow computations (by TimelyDataflow)

  • I've been using DD in production for just over a year now, doing low-latency (sub-second from real-world event to pipeline CDC output) processing in a geo-distributed environment (hundreds of locations coordinating globally), on some days at the terabyte-per-day level of event ingest.

    DD for me was one of the final attempts to find something, anything, that could handle the requirements I was working with, because Spark, Flink, and others just couldn't reasonably get close to what I was looking for. The closest 2nd place was Apache Flink.

    Over the last year I've read through the DD and TD codebases fully about 5-7 times. Even with that, I'm often in a position where I go back to my own applications to see how I had already solved a type of problem. I liken the project to taking someone used to NASCAR and dropping them into a Formula One car. You've seen it work so much faster, and the tech and capabilities are clearly designed for so much more than you can make it do right now.

    A few learning examples that I consider funny:

    1. I had a graph on the order of 1.2 trillion edges with about 90 million nodes. I was using serde-derived structs for the edge and node types (not simplified numerical types), which means I have to implement (or derive) a bunch of traits myself. I spent way more time than I'd like to admit trying to get .reduce() to work to remove 'surplus' edges that had already been processed from the graph, to shrink the working dataset. Finally, in frustration and while reading through the DD codebase again, I 'rediscovered' .consolidate(), which 'just worked', taking the 1.2 trillion edges down to 300 million edges. For instance, some of the edge values I need to work with carry histograms for the distributions, and some of the scoring of those histograms is custom. Not usually an issue, except that having to figure out how to implement a bunch of the traits has been a significant hurdle.

    2. I constantly have to dance between DD's runtime and trying to ergonomically connect the application to the tonic gRPC and tokio interfaces. Luckily I've found a nice pattern: I create my inter-thread communication constructs, then start up two Rust threads, starting the tokio-based interfaces in one and the DD runtime and workers in the other (a minimal sketch of this pattern appears after this list). On bigger servers (packet.net has some great gen3 instances) I usually pin tokio to 2-8 cores and leave the rest of the cores to DD.

    3. With almost every new app I start, I run into the gotcha where I want a worker that runs only once 'globally', and it's usually the thread I'd want to use to coordinate data ingestion. It's super simple to just add a guard for if worker.index() == 0, but when I'm deep in thought about an upcoming pipeline it's often forgotten.

    4. For diagnostics, there is https://github.com/TimelyDataflow/diagnostics which has provided much-needed insight when things have gotten complex. Usually it's been 'just enough' to point me in the right direction, but only once was the output able to point exactly to the issue I was running into.

    5. I have really high hopes for materialize.io. That's really the type of system I'd want to use in 80% of the cases where I'm using DD right now. I've been following them for about a year, and the progress is incredible, but my use cases seem more likely to be supported in the 0.8->1.3 roadmap range.

    6. I've wanted a way to express 'use no more than 250 GB of RAM' and to get some kind of compile-time feedback that a given fixed dataset can't be processed by the pipeline within that budget. It'd be far better if the system could adjust its internal runtime approach in order to stay within the limits.
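
    A minimal sketch (not the commenter's code) of the pattern from points 2 and 3: the inter-thread plumbing is created first, the ingest side runs on its own thread (a plain thread stands in for the tokio/tonic side here, to keep the sketch dependency-free), and only worker 0 pulls from the shared channel.

        use std::sync::{mpsc, Arc, Mutex};
        use std::thread;
        use std::time::Duration;

        use differential_dataflow::input::InputSession;

        fn main() {
            // Inter-thread communication constructs, created up front.
            let (tx, rx) = mpsc::channel::<u64>();
            let rx = Arc::new(Mutex::new(Some(rx)));

            // Ingest thread: in the real pattern this is where the tokio runtime and
            // the tonic gRPC services live, forwarding events into the channel.
            let ingest = thread::spawn(move || {
                for event in 0..100u64 {
                    tx.send(event).expect("dataflow side hung up");
                    thread::sleep(Duration::from_millis(10));
                }
            });

            // DD/timely runtime: this closure runs once per worker thread.
            timely::execute_from_args(std::env::args(), move |worker| {
                let mut input = InputSession::<u64, u64, isize>::new();

                let probe = worker.dataflow(|scope| {
                    input.to_collection(scope)
                        .inspect(|x| println!("event: {:?}", x))
                        .probe()
                });

                // Point 3: guard ingestion so it happens exactly once "globally".
                if worker.index() == 0 {
                    let rx = rx.lock().unwrap().take().expect("receiver already taken");
                    let mut round = 1u64;
                    while let Ok(event) = rx.recv() {
                        input.insert(event);
                        input.advance_to(round);
                        input.flush();
                        round += 1;
                        worker.step_while(|| probe.less_than(input.time()));
                    }
                }
                // Other workers simply return; timely keeps stepping them until the
                // dataflow shuts down.
            })
            .expect("timely computation failed");

            ingest.join().unwrap();
        }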

  • timely-dataflow

    A modular implementation of timely dataflow in Rust

  • ballista

    Discontinued Distributed compute platform implemented in Rust, and powered by Apache Arrow.

  • I've looked at this and thought it looked amazing, but also haven't used it for anything. Some thoughts...

    Rust is a blessing and a curse. It seems like the obvious choice for data pipelines, but everything big currently exists in Java and the small stuff is in JavaScript, Python or R. Maybe this will slowly change, but it's a big ship to turn. I'm hopeful that tools like this and Ballista [1] will eventually get things moving.

    Since the Rust community is relatively small, language bindings would be very helpful. Being able to configure pipelines from Java or TypeScript(!) would be great.

    Or maybe it's just that this form of computation is too foreign. By the time you need it, the project is so large that it's too late to redesign it to use it. I'm also unclear on how it would handle changing requirements and recomputing new aggregations over old data. Better docs with more convincing examples would be helpful here. The GitHub page showing counting isn't very compelling.

    [1] https://github.com/ballista-compute/ballista

  • sliding-window-aggregators

    Reference implementations of sliding window aggregation algorithms

  • A few others and I have done a lot of research on updating sliding-window aggregations without recomputing everything. Our code is on GitHub, and the README has links to the papers: https://github.com/IBM/sliding-window-aggregators
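
    For readers new to the area, here is a minimal sketch of one classic technique in this family, a "two stacks" sliding-window aggregator for an arbitrary associative operator (the repository above contains reference implementations of this family of algorithms); insert and evict are amortized O(1), and the window is never re-aggregated from scratch.

        // "Two stacks" sliding-window aggregation for any associative operator.
        struct TwoStacks<T, F> {
            combine: F,         // associative binary operator
            front: Vec<(T, T)>, // (value, aggregate of this element and everything below it)
            back: Vec<(T, T)>,
        }

        impl<T: Clone, F: Fn(&T, &T) -> T> TwoStacks<T, F> {
            fn new(combine: F) -> Self {
                TwoStacks { combine, front: Vec::new(), back: Vec::new() }
            }

            // Add the newest element to the window.
            fn insert(&mut self, v: T) {
                let agg = match self.back.last() {
                    Some((_, a)) => (self.combine)(a, &v),
                    None => v.clone(),
                };
                self.back.push((v, agg));
            }

            // Remove the oldest element from the window.
            fn evict(&mut self) {
                if self.front.is_empty() {
                    // Flip the back stack; aggregates are rebuilt so the top of the
                    // front stack always covers the oldest part of the window.
                    while let Some((v, _)) = self.back.pop() {
                        let agg = match self.front.last() {
                            Some((_, a)) => (self.combine)(&v, a),
                            None => v.clone(),
                        };
                        self.front.push((v, agg));
                    }
                }
                self.front.pop();
            }

            // Aggregate over the current window contents, if any.
            fn query(&self) -> Option<T> {
                match (self.front.last(), self.back.last()) {
                    (Some((_, f)), Some((_, b))) => Some((self.combine)(f, b)),
                    (Some((_, f)), None) => Some(f.clone()),
                    (None, Some((_, b))) => Some(b.clone()),
                    (None, None) => None,
                }
            }
        }

        fn main() {
            // Sliding maximum over a window of (at most) three elements.
            let mut window = TwoStacks::new(|a: &i64, b: &i64| std::cmp::max(*a, *b));
            let mut len = 0;
            for &x in &[3i64, 1, 4, 1, 5, 9, 2, 6] {
                window.insert(x);
                len += 1;
                if len > 3 {
                    window.evict();
                    len -= 1;
                }
                println!("window max = {:?}", window.query());
            }
        }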
