differential-dataflow VS go-ds-crdt

Compare differential-dataflow vs go-ds-crdt and see what their differences are.

differential-dataflow

An implementation of differential dataflow using timely dataflow in Rust. (by TimelyDataflow)

go-ds-crdt

A distributed go-datastore implementation using Merkle-CRDTs. (by ipfs)
|                                | differential-dataflow | go-ds-crdt                               |
|--------------------------------|-----------------------|------------------------------------------|
| Mentions                       | 14                    | 7                                        |
| Stars                          | 2,473                 | 360                                      |
| Star growth (month over month) | 0.8%                  | 1.7%                                     |
| Activity                       | 8.3                   | 6.1                                      |
| Last commit                    | 7 days ago            | 3 months ago                             |
| Language                       | Rust                  | Go                                       |
| License                        | MIT License           | GNU General Public License v3.0 or later |
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

differential-dataflow

Posts with mentions or reviews of differential-dataflow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-21.
  • We Built a Streaming SQL Engine
    3 projects | news.ycombinator.com | 21 Oct 2023
    Some recent solutions to this problem include Differential Dataflow and Materialize. It would be neat if postgres adopted something similar for live-updating materialized views.

    https://github.com/timelydataflow/differential-dataflow

    https://materialize.com/

  • Hydroflow: Dataflow Runtime in Rust
    5 projects | news.ycombinator.com | 7 Jun 2023
    I'm looking for this but can't find it: how does this project compare to differential dataflow?

    As a sibling commenter mentioned, it's built on timely dataflow (which is lower-level), but that already has differential dataflow[0] built on top of it by the same authors.

    How do they differ?

    [0]: https://github.com/TimelyDataflow/differential-dataflow

  • Using Rust to write a Data Pipeline. Thoughts. Musings.
    5 projects | /r/rust | 14 Jan 2023
  • PlanetScale Boost
    6 projects | news.ycombinator.com | 15 Nov 2022
  • Program Synthesis is Possible (2018)
    3 projects | news.ycombinator.com | 4 Sep 2022
  • Convex vs. Firebase
    7 projects | news.ycombinator.com | 21 Jun 2022
    Hi! Sujay from Convex here. I remember reading about your "reverse query engine" when we were getting started last year and really liking that framing of the broadcast problem.

    As James mentions, we entirely re-run the JavaScript function whenever we detect that any of its inputs has changed. Incrementality at this layer would be very difficult, since we're dealing with a general-purpose programming language. Also, since we fully sandbox and determinize these JavaScript "queries," the majority of the cost is in accessing the database.

    Eventually, I'd like to explore "reverse query execution" on the boundary between JavaScript and the underlying data using an approach like differential dataflow [1]. The Materialize folks [2] have made a lot of progress applying it to OLAP, and ReadySet [3] is using similar techniques for OLTP.

    [1] https://github.com/TimelyDataflow/differential-dataflow

    [2] https://materialize.com/

    [3] https://readyset.io/

  • Announcing avalanche 0.1, a React- and Svelte-inspired GUI library
    6 projects | /r/rust | 30 Dec 2021
    differential dataflow, which is used to power the Materialize database
  • Differential Datalog
    7 projects | news.ycombinator.com | 19 Mar 2021
    It's partially inspired by LINQ, so the similarity you see is expected.

    It's not really arbitrary structures, though you're mostly free in what record type you use in a relation (structs and tagged enums are typical).

    The incremental part is that you can feed it changes to the input (additions/retractions of facts) and get changes to the outputs back with low latency. Alternatively, you can just use it to keep an index up to date, so you can quickly look things up by key, like a materialized view in SQL (see the sketch after this list).

    This [0] section in the readme of the underlying incremental dataflow framework may help get the concept across, but feel free to follow up if you're still not seeing the incrementality.

    [0]: https://github.com/TimelyDataflow/differential-dataflow#an-e...

  • Dbt and Materialize
    3 projects | news.ycombinator.com | 1 Mar 2021
  • Materialized view questions
    1 project | /r/mit6824clojure | 28 Feb 2021
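
To make the additions/retractions point from the Differential Datalog comment above concrete, here is a minimal sketch that uses differential-dataflow itself, loosely following its README example (the crate setup and version details are assumptions, not something stated on this page). Records inserted into or removed from an input collection cause the downstream count to emit only the corresponding output changes.

```rust
// A minimal differential-dataflow sketch. Cargo dependencies (assumed):
// timely and differential-dataflow.
use differential_dataflow::input::Input;
use differential_dataflow::operators::Count;

fn main() {
    timely::execute_from_args(std::env::args(), |worker| {
        // Build a dataflow that maintains a per-word count and prints every
        // output change as a (data, time, diff) triple.
        let mut input = worker.dataflow::<u32, _, _>(|scope| {
            let (handle, words) = scope.new_collection::<String, isize>();
            words.count().inspect(|x| println!("change: {:?}", x));
            handle
        });

        // Additions at time 0: counts for "apple" and "banana" appear downstream.
        input.insert("apple".to_string());
        input.insert("apple".to_string());
        input.insert("banana".to_string());
        input.advance_to(1);
        input.flush();

        // A retraction at time 1: the output reports the old count of "apple"
        // going away and the new count arriving, instead of recomputing everything.
        input.remove("apple".to_string());
        input.advance_to(2);

        // Dropping the input handle when the closure returns closes the input,
        // so the worker can drain the remaining work.
    })
    .unwrap();
}
```

Each printed triple is (data, time, diff), where diff is the signed change (+1 or -1) applied to that output record at that timestamp.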

go-ds-crdt

Posts with mentions or reviews of go-ds-crdt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-25.
  • CRDTs Turned Inside Out
    2 projects | news.ycombinator.com | 25 Jan 2024
    I forgot: a key-value store using Merkle-CRDTs (MD-CRDTs) was implemented here: https://github.com/ipfs/go-ds-crdt

    The trickiest part was not the CRDT, but the DAG traversal with multiple workers processing parallel updates on multiple branches and switching CRDT-DAG roots as they finish branches.

  • We Put IPFS in Brave
    4 projects | news.ycombinator.com | 24 May 2022
    In https://github.com/ipfs/go-ds-crdt, every node in the Merkle DAG has a "Priority" field. When adding a new head, this is set to (the maximum of the children's priorities) + 1.

    Thus, this priority represents the current depth (or height) of the DAG at each node. It acts somewhat like a timestamp, and you could use an actual timestamp, or whatever else helps you sort. In the case of concurrent writes, the write with the highest priority wins. If we have concurrent writes of the same priority, they are ordered by CID (see the sketch after this list).

    The idea here is that, in general, a node that is lagging behind or not syncing would have a shallower DAG, so its writes would have lower priority when they conflict with writes from others that have built deeper DAGs. But this is, after all, an implementation choice, and the fact that a DAG is deeper does not mean that the last write on a key happened "later".

  • Making CRDTs Byzantine Fault Tolerant [pdf]
    3 projects | news.ycombinator.com | 4 Mar 2022
    The idea of DAG-embedded CRDTs is far from new and was introduced here:

    https://arxiv.org/abs/2004.00107 (I'm among the authors)

    Unfortunately, the verification that the author proposes (not accepting new updates until the DAG below is verified) will need a lot of caveats for real-world usage.

    Currently we use these CRDTs for a key-value database of 40M+ keys in a deployment of ipfs-cluster, which uses https://github.com/ipfs/go-ds-crdt .

  • Ask HN: P2P Databases?
    3 projects | news.ycombinator.com | 1 Mar 2022
  • Go-ds-CRDT: distributed datastore using Merkle-CRDTs
    1 project | news.ycombinator.com | 28 Oct 2021
  • Conflict-free replicated datatypes solve distributed data consistency challenges
    2 projects | news.ycombinator.com | 28 Oct 2021
  • Data Laced with History: Causal Trees and Operational CRDTs (2018)
    2 projects | news.ycombinator.com | 14 Feb 2021
    Not 100% the thing, but potentially related work in this area:

    https://github.com/ipfs/go-ds-crdt

    (See link to paper, and links to other projects in it, like OrbitDB).
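
For illustration, here is a minimal sketch of the conflict-resolution rule described in the "We Put IPFS in Brave" comment above: the write with the highest priority (DAG height) wins, and concurrent writes with equal priority are ordered by CID. It is written in Rust to match the example earlier on this page; it is not go-ds-crdt's Go code, and every name in it is hypothetical.

```rust
/// Illustrative only: these types and names are hypothetical, not go-ds-crdt's API.
/// A value stored for a key, tagged with the priority (DAG height) of the delta
/// node that carried it and that node's CID.
#[derive(Debug)]
struct Entry {
    value: Vec<u8>,
    priority: u64, // max of the children's priorities + 1, i.e. DAG height
    cid: String,   // content identifier of the delta node, used as a tiebreaker
}

/// Decide whether `incoming` should replace `current` for the same key:
/// the higher priority wins; on equal priority, compare CIDs. (The direction of
/// the CID comparison is arbitrary here; the comment only says ties are sorted by CID.)
fn wins(incoming: &Entry, current: &Entry) -> bool {
    (incoming.priority, incoming.cid.as_str()) > (current.priority, current.cid.as_str())
}

fn main() {
    let current = Entry { value: b"old".to_vec(), priority: 7, cid: "bafy-aaa".into() };
    let concurrent = Entry { value: b"new".to_vec(), priority: 7, cid: "bafy-bbb".into() };

    // Equal priorities, so the CID comparison decides which write is kept.
    let kept = if wins(&concurrent, &current) { &concurrent } else { &current };
    println!("kept value: {:?} (cid {})", kept.value, kept.cid);
}
```

A node that has fallen behind will tend to produce shallower deltas (lower priority), so under this rule its conflicting writes lose to writes from peers that have built deeper DAGs, as the comment above explains.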

What are some alternatives?

When comparing differential-dataflow and go-ds-crdt you can also consider the following projects:

ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.

merkle-crdt - Merkle-Clock CRDT implementation in Python

materialize - The data warehouse for operational workloads.

verneuil - Verneuil is a VFS extension for SQLite that asynchronously replicates databases to S3-compatible blob stores.

reflow - A language and runtime for distributed, incremental data processing in the cloud

yjs - Shared data types for building collaborative software

differential-datalog - DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.

Apache Ignite - Apache Ignite

timely-dataflow - A modular implementation of timely dataflow in Rust

yata - YATA-based algorithm for plain-text CRDT edit merging in Python

clj-3df - Clojure(Script) client for Declarative Dataflow.

crdt-study - A Python study of distributed, conflict-free Last-Writer-Wins (LWW) undirected graphs