diagnostics vs differential-dataflow

Compare diagnostics and differential-dataflow and see what their differences are.

diagnostics

Diagnostic tools for timely dataflow computations (by TimelyDataflow)

differential-dataflow

An implementation of differential dataflow using timely dataflow on Rust. (by TimelyDataflow)
                 diagnostics           differential-dataflow
Mentions         1                     14
Stars            41                    2,473
Growth           -                     0.8%
Activity         0.0                   8.3
Last commit      almost 2 years ago    8 days ago
Language         Rust                  Rust
License          MIT License           MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

diagnostics

Posts with mentions or reviews of diagnostics. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-01-22.
  • Why isn't differential dataflow more popular?
    13 projects | news.ycombinator.com | 22 Jan 2021
    I've been using DD in production for just over a year now, for low-latency processing (sub-second from real-world event to pipeline CDC output) in a geo-distributed environment (hundreds of locations coordinating globally), on some days at terabytes per day of event ingest.

    DD for me was one of the final attempts to find something, anything, that could handle the requirements I was working with, because Spark, Flink, and the others just couldn't reasonably get close to what I was looking for. The closest second place was Apache Flink.

    Over the last year I've read fully through the DD and TD codebases about 5-7 times. Even with that, I'm often in a position where I go back to my own applications to see how I had already solved a type of problem. I liken the project to taking someone used to NASCAR and dropping them into a Formula One car: you've seen it go so much faster, and the tech and capabilities are clearly designed for far more than you can make it do right now.

    A few learning examples that I consider funny:

    1. I had a graph on the order of 1.2 trillion edges with about 90 million nodes. I was using serde-derived structs for the edge and node types (not simplified numerical types), which means I have to implement (or derive) a bunch of traits myself. I spent far more time than I'd like to admit trying to get .reduce() to remove 'surplus' edges that had already been processed, to shrink the working dataset. Finally, in frustration and while reading through the DD codebase again, I 'rediscovered' .consolidate(), which 'just worked', taking the 1.2 trillion edges down to 300 million. For instance, some of the edge values I need to work with carry histograms of distributions, and some of the scoring of those histograms is custom. Not usually an issue, except that figuring out how to implement a bunch of the traits has been a significant hurdle.

    2. I constantly dance between DD's runtime and trying to connect the application ergonomically to the tonic gRPC and tokio interfaces. Luckily I've found a nice pattern: I create my inter-thread communication constructs, then start up two Rust threads, starting the tokio-based interfaces in one and the DD runtime and workers in the other. On bigger servers (packet.net has some great gen3 instances) I usually pin tokio to 2-8 cores and leave the rest of the cores to DD.

    3. In almost every new app I start, I run into the gotcha where I want a worker that runs only once 'globally', and it's usually the thread I'd want to use to coordinate data ingestion. It's super simple to add a guard like if worker.index() == 0, but when I'm deep in thought about an upcoming pipeline, it's often forgotten.

    4. For diagnostics, there is https://github.com/TimelyDataflow/diagnostics, which has provided much-needed insight when things have gotten complex. Usually it's been 'just enough' to point in the right direction, but only once was the output able to point exactly at the issue I was running into.

    5. I have really high hopes for materialize.io. That's really the type of system I'd want to use in 80% of the cases where I'm using DD right now. I've been following them for about a year now, and the progress is incredible, but my use cases seem more likely to be supported in the 0.8-1.3 roadmap range.

    6. I've wanted a way to express 'use no more than 250GB of RAM' and get compile-time feedback that a fixed dataset can't be processed by the pipeline within those resources. It would be far better still if the system could adjust its internal runtime approach to stay within the limits.
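    The .consolidate() behavior from example 1 can be illustrated without the real framework. Logically, differential dataflow represents a collection as (record, diff) updates, and consolidation merges identical records by summing their diffs and dropping anything that nets to zero. A minimal std-only sketch, where the Edge type is a made-up stand-in for the serde-derived structs mentioned above:

```rust
use std::collections::HashMap;

// Hypothetical edge record standing in for the serde-derived structs
// above; differential's real consolidate() works on any ordered record.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Edge {
    src: u64,
    dst: u64,
}

// Sketch of what consolidate() does logically: sum the diffs of
// identical records and drop records whose net diff is zero.
fn consolidate(updates: Vec<(Edge, isize)>) -> Vec<(Edge, isize)> {
    let mut totals: HashMap<Edge, isize> = HashMap::new();
    for (edge, diff) in updates {
        *totals.entry(edge).or_insert(0) += diff;
    }
    totals.into_iter().filter(|&(_, diff)| diff != 0).collect()
}
```

    An edge added twice and retracted once nets to a single copy; an edge added and then retracted disappears entirely, which is why consolidation can shrink a working set so dramatically.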
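    The two-runtime pattern from example 2 can be sketched with plain std threads and channels. In the real application one thread would run the tokio-based gRPC interfaces and the other the DD runtime and workers; here both sides are ordinary threads so the sketch stays self-contained, and the function and message types are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the split-runtime pattern: build the inter-thread
// communication constructs first, then spawn one thread for the
// API side and one for the dataflow side.
fn run_split_runtimes(inputs: Vec<u64>) -> Vec<u64> {
    let (to_dataflow, from_api) = mpsc::channel();
    let (to_api, from_dataflow) = mpsc::channel();

    // "API" thread: forwards incoming requests to the dataflow side.
    // Dropping its sender at the end closes the channel.
    let api = thread::spawn(move || {
        for x in inputs {
            to_dataflow.send(x).unwrap();
        }
    });

    // "Dataflow" thread: consumes requests and emits results.
    let dataflow = thread::spawn(move || {
        for x in from_api {
            to_api.send(x * 2).unwrap();
        }
    });

    api.join().unwrap();
    dataflow.join().unwrap();
    from_dataflow.into_iter().collect()
}
```

    Setting up the channels before spawning either runtime is the key design point: neither side ever needs a handle to the other's runtime, only to the channel endpoints.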
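    The "runs only once globally" gotcha from example 3 comes down to guarding on the worker index, since every timely worker executes the same dataflow-construction code. A toy sketch, with a made-up helper name, of why an unguarded ingestion path runs once per worker:

```rust
// Every worker calls this with its own index; without the guard,
// the ingestion side effect would fire once per worker, multiplying
// the input by the number of workers.
fn ingest_if_coordinator(worker_index: usize, log: &mut Vec<String>) {
    if worker_index == 0 {
        log.push(format!("worker {} ingesting", worker_index));
    }
}
```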

differential-dataflow

Posts with mentions or reviews of differential-dataflow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-21.
  • We Built a Streaming SQL Engine
    3 projects | news.ycombinator.com | 21 Oct 2023
    Some recent solutions to this problem include Differential Dataflow and Materialize. It would be neat if postgres adopted something similar for live-updating materialized views.

    https://github.com/timelydataflow/differential-dataflow

    https://materialize.com/

  • Hydroflow: Dataflow Runtime in Rust
    5 projects | news.ycombinator.com | 7 Jun 2023
    I'm looking for this but can't find it, how does this project compare to differential dataflow?

    As a sibling commenter mentioned, it's built on timely dataflow (which is lower-level), but that already has differential dataflow[0] built on top of it by the same authors.

    How do they differ?

    [0]: https://github.com/TimelyDataflow/differential-dataflow

  • Using Rust to write a Data Pipeline. Thoughts. Musings.
    5 projects | /r/rust | 14 Jan 2023
  • PlanetScale Boost
    6 projects | news.ycombinator.com | 15 Nov 2022
  • Program Synthesis is Possible (2018)
    3 projects | news.ycombinator.com | 4 Sep 2022
  • Convex vs. Firebase
    7 projects | news.ycombinator.com | 21 Jun 2022
    hi! sujay from convex here. I remember reading about your "reverse query engine" when we were getting started last year and really liking that framing of the broadcast problem here.

    as james mentions, we entirely re-run the javascript function whenever we detect any of its inputs change. incrementality at this layer would be very difficult, since we're dealing with a general purpose programming language. also, since we fully sandbox and determinize these javascript "queries," the majority of the cost is in accessing the database.

    eventually, I'd like to explore "reverse query execution" on the boundary between javascript and the underlying data using an approach like differential dataflow [1]. the materialize folks [2] have made a lot of progress applying it for OLAP and readyset [3] is using similar techniques for OLTP.

    [1] https://github.com/TimelyDataflow/differential-dataflow

    [2] https://materialize.com/

    [3] https://readyset.io/

  • Announcing avalanche 0.1, a React- and Svelte-inspired GUI library
    6 projects | /r/rust | 30 Dec 2021
    differential dataflow, which is used to power Materialize
  • Differential Datalog
    7 projects | news.ycombinator.com | 19 Mar 2021
    It's partially inspired by Linq, so the similarity you see is expected.

    It's not really arbitrary structures so much, though you're mostly free in what record type you use in a relation (structs and tagged enums are typical, though).

    The incremental part is that you can feed it changes to the input (additions and retractions of facts) and get changes to the outputs back with low latency. Alternatively, you can use it just to keep an index up to date, where you can quickly look up values by key, much like a materialized view in SQL.

    This [0] section in the readme of the underlying incremental dataflow framework may help get the concept across, but feel free to follow up if you're still not seeing the incrementality.

    [0]: https://github.com/TimelyDataflow/differential-dataflow#an-e...
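    The additions/retractions contract described above can be sketched with a toy index that maintains counts under signed updates and reports only the resulting output changes. The type and method names here are illustrative, not differential dataflow's actual API:

```rust
use std::collections::HashMap;

// Toy incremental index: maintains a count per fact under signed
// updates, returning (fact, old_count, new_count) for each change.
struct CountIndex {
    counts: HashMap<String, i64>,
}

impl CountIndex {
    fn new() -> Self {
        CountIndex { counts: HashMap::new() }
    }

    // Apply a batch of (fact, diff) updates; a positive diff is an
    // addition, a negative diff a retraction.
    fn update(&mut self, batch: &[(&str, i64)]) -> Vec<(String, i64, i64)> {
        let mut changes = Vec::new();
        for &(key, diff) in batch {
            let entry = self.counts.entry(key.to_string()).or_insert(0);
            let old = *entry;
            *entry += diff;
            changes.push((key.to_string(), old, *entry));
        }
        changes
    }
}
```

    The point of the sketch is the shape of the interface: callers send deltas in and get deltas out, never recomputing the whole index, which is what makes keeping a materialized-view-like structure cheap under churn.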

  • Dbt and Materialize
    3 projects | news.ycombinator.com | 1 Mar 2021
  • Materialized view questions
    1 project | /r/mit6824clojure | 28 Feb 2021

What are some alternatives?

When comparing diagnostics and differential-dataflow you can also consider the following projects:

differential-datalog - DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.

ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.

timely-dataflow - A modular implementation of timely dataflow in Rust

materialize - The data warehouse for operational workloads.

rslint - A (WIP) Extremely fast JavaScript and TypeScript linter and Rust crate

reflow - A language and runtime for distributed, incremental data processing in the cloud

sliding-window-aggregators - Reference implementations of sliding window aggregation algorithms

blog - Some notes on things I find interesting and important.

lambdo - Feature engineering and machine learning: together at last!

clj-3df - Clojure(Script) client for Declarative Dataflow.