noria
timely-dataflow
| | noria | timely-dataflow |
|---|---|---|
| Mentions | 26 | 11 |
| Stars | 4,874 | 3,145 |
| Stars growth (monthly) | 0.0% | 1.1% |
| Activity | 0.0 | 7.2 |
| Last commit | over 2 years ago | 23 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
noria
-
Relational is more than SQL
> Automatically managed, application-transparent, physical denormalisation entirely managed by the database is something I am very, very interested in.
Sounds a bit like Noria: https://github.com/mit-pdos/noria
-
JetBrains Noria
It feels more than a little bit coincidental to call it Noria when https://github.com/mit-pdos/noria exists (and has been posted about here on HN)... especially with the whole bit about incrementally computing changes.
-
Uplevel database development with DataSQRL: A compiler for the data layer
Is this similar in spirit to Noria?
https://github.com/mit-pdos/noria
-
Dozer: A scalable Real-Time Data APIs backend written in Rust
I assume you have studied Noria? https://github.com/mit-pdos/noria
-
What are the Rust databases and their benefits?
If you want to see how databases are implemented in Rust, try https://github.com/mit-pdos/noria
-
Materialized View: SQL Queries on Steroids
-
Measuring how much Rust's bounds checking actually costs
Only tangentially related, but I wondered what the differences were between ReadySet and Noria, and they address this exact question in their repository. I'm really glad to know that the ideas behind Noria didn't die when Noria was abandoned after /u/jonhoo graduated.
-
PlanetScale Boost serves your SQL queries instantly
:wave: Author of the paper this work is based on here.
I'm so excited to see dynamic, partially-stateful data-flow for incremental materialized view maintenance becoming more wide-spread! I continue to think it's a _great_ idea, and the speed-ups (and complexity reduction) it can yield are pretty immense, so seeing more folks building on the idea makes me very happy.
The PlanetScale blog post references my original "Noria" OSDI paper (https://pdos.csail.mit.edu/papers/noria:osdi18.pdf), but I'd actually recommend my PhD thesis instead (https://jon.thesquareplanet.com/papers/phd-thesis.pdf), as it goes much deeper about some of the technical challenges and solutions involved. It also has a chapter (Appendix A) that covers how it all works by analogy, which the less-technical among the audience may appreciate :) A recording of my thesis defense on this, which may be more digestible than the thesis itself, is also online at https://www.youtube.com/watch?v=GctxvSPIfr8, as well as a shorter talk from a few years earlier at https://www.youtube.com/watch?v=s19G6n0UjsM. And the Noria research prototype (written in Rust) is on GitHub: https://github.com/mit-pdos/noria.
As others have already mentioned in the comments, I co-founded ReadySet (https://readyset.io/) shortly after graduating specifically to build off of Noria, and they're doing amazing work to provide these kinds of speed-ups for general-purpose relational databases. If you're using one of those, it's worth giving ReadySet a look to get these kinds of speedups there! It's also source-available @ https://github.com/readysettech/readyset if you're curious.
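The "partially-stateful data-flow" idea described in this thread can be illustrated with a small sketch. This is not Noria's or ReadySet's API; all names here are invented for illustration. The point is the mechanism: cached view entries can be evicted to bound memory, incremental writes update only entries that are currently materialized, and a read miss triggers a recompute from base data (what the thesis calls an upquery).

```rust
use std::collections::HashMap;

/// Toy partially-stateful view: per-key sums derived from a base log.
/// Real systems do this inside a dataflow graph; this only illustrates
/// the eviction + on-demand recompute ("upquery") idea.
struct PartialView {
    base: Vec<(String, i64)>,    // base table: (key, value) rows
    cache: HashMap<String, i64>, // materialized sums, possibly partial
}

impl PartialView {
    fn insert(&mut self, key: &str, val: i64) {
        self.base.push((key.to_string(), val));
        // Incrementally update only keys that are currently materialized.
        if let Some(sum) = self.cache.get_mut(key) {
            *sum += val;
        }
    }

    fn evict(&mut self, key: &str) {
        self.cache.remove(key); // bound memory by dropping cold entries
    }

    fn read(&mut self, key: &str) -> i64 {
        if let Some(&sum) = self.cache.get(key) {
            return sum; // cache hit: O(1)
        }
        // Miss: "upquery" the base table, then re-materialize the entry.
        let sum = self
            .base
            .iter()
            .filter(|(k, _)| k.as_str() == key)
            .map(|(_, v)| v)
            .sum();
        self.cache.insert(key.to_string(), sum);
        sum
    }
}

fn main() {
    let mut view = PartialView { base: Vec::new(), cache: HashMap::new() };
    view.insert("a", 2);
    view.insert("a", 3);
    assert_eq!(view.read("a"), 5); // first read materializes the entry
    view.insert("a", 1);           // write updates the cached sum
    assert_eq!(view.read("a"), 6);
    view.evict("a");               // eviction keeps memory bounded
    assert_eq!(view.read("a"), 6); // miss recomputes from the base log
    println!("partial-state sketch ok");
}
```

The memory/latency trade-off is visible even in this toy: evicted entries cost a full base scan on the next read, while materialized entries serve reads in constant time.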
-
PlanetScale Boost
It seems similar to MIT's Noria [1]
> Noria is a new streaming data-flow system designed to act as a fast storage backend for read-heavy web applications based on Jon Gjengset's Phd Thesis, as well as this paper from OSDI'18. It acts like a database, but precomputes and caches relational query results so that reads are blazingly fast. Noria automatically keeps cached results up-to-date as the underlying data, stored in persistent base tables, change. Noria uses partially-stateful data-flow to reduce memory overhead, and supports dynamic, runtime data-flow and query change.
[1] https://github.com/mit-pdos/noria
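The quoted description boils down to moving work from the read path to the write path. A minimal self-contained sketch of that idea (invented names, not Noria's actual API): a derived "story with vote count" view, conceptually a join plus aggregate, is maintained as writes arrive, so a read is a single lookup rather than a query.

```rust
use std::collections::HashMap;

/// Toy write-path view maintenance: base tables persist writes, and a
/// derived view is updated on each write so reads are plain lookups.
#[derive(Default)]
struct Db {
    stories: HashMap<u64, String>,     // base table: id -> title
    view: HashMap<u64, (String, u64)>, // cached view: id -> (title, votes)
}

impl Db {
    fn insert_story(&mut self, id: u64, title: &str) {
        self.stories.insert(id, title.to_string());
        self.view.insert(id, (title.to_string(), 0)); // seed the view row
    }

    fn insert_vote(&mut self, story_id: u64) {
        // The join + aggregate is maintained on the write path...
        if let Some(row) = self.view.get_mut(&story_id) {
            row.1 += 1;
        }
    }

    fn read(&self, story_id: u64) -> Option<&(String, u64)> {
        // ...so the read path is a single hash lookup, no join or scan.
        self.view.get(&story_id)
    }
}

fn main() {
    let mut db = Db::default();
    db.insert_story(1, "Noria");
    db.insert_vote(1);
    db.insert_vote(1);
    assert_eq!(db.read(1), Some(&("Noria".to_string(), 2)));
    println!("view maintenance sketch ok");
}
```

What Noria adds on top of this hand-rolled pattern is doing it automatically for arbitrary SQL queries, plus the partial-state machinery to keep the cached views from consuming unbounded memory.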
-
OctoSQL allows you to join data from different sources using SQL
Materialize is really neat; also check out https://github.com/mit-pdos/noria. It inverts the query problem and processes the data on insert, much like what many applications end up doing by hand with a NoSQL solution.
timely-dataflow
-
Readyset: A MySQL and Postgres wire-compatible caching layer
They have a bit about their technical foundation here[0].
Given that Readyset was co-founded by Jon Gjengset, who authored the paper on Noria[1] (but has apparently since departed the company), I would assume that Readyset is the continuation of that research.
So it shares some roots with Materialize: they have a common conceptual ancestry in Naiad[2], and Materialize evolved out of timely-dataflow[3].
[0]: https://docs.readyset.io/concepts/streaming-dataflow
[1]: https://jon.thesquareplanet.com/papers/osdi18-noria.pdf
[2]: https://dl.acm.org/doi/10.1145/2517349.2522738
[3]: https://github.com/TimelyDataflow/timely-dataflow
-
Mandala: experiment data management as a built-in (Python) language feature
And systems like timely dataflow, https://github.com/TimelyDataflow/timely-dataflow
-
Arroyo: A distributed stream processing engine written in Rust
Project looks cool! Glad you open sourced it. It could use some comments in the code base to help contributors ;). I also like the DataFusion usage; that is awesome. BTW I work on github.com/bytewax/bytewax, which is based on https://github.com/TimelyDataflow/timely-dataflow, another Rust dataflow computation engine.
-
Rust MPI -- Will there ever be a fully oxidized implementation?
Just found this https://github.com/TimelyDataflow/timely-dataflow and my heart skipped a beat.
-
Streaming processing in Python using Timely Dataflow with Bytewax
Bytewax is a native Python binding to the Timely Dataflow library (written in Rust) for building highly scalable streaming (and batch) processing pipelines.
-
Alternative Kafka Integration Framework to Kafka Connect?
I am working on Bytewax, which is a Python stream processing framework built on Timely Dataflow. It is not exactly a Kafka integration framework, because it is more of a general stream processing framework, but it might be interesting for you. We are focused on enabling people to more easily debug, containerize, parallelize, and customize their pipelines, and less on providing a declarative integration framework. It is still early days for us, and we are looking for feedback and ideas from the community!
-
[AskJS] JavaScript for data processing
We used to use a library called Pond.js, https://github.com/esnet/pond, but the reliance on Immutable.JS caused some performance pitfalls, so we wrote a system from scratch that deals with data in a batched streaming fashion. A lot of the concepts were borrowed from a Rust library called timely-dataflow, https://github.com/TimelyDataflow/timely-dataflow.
-
Dataflow: An Efficient Data Processing Library for Machine Learning
Though the name "Dataflow" might be an unfortunate conflict with another Rust project: https://github.com/TimelyDataflow/timely-dataflow
-
Ask HN: Is there a way to subscribe to an SQL query for changes?
> In the simplest case, I'm talking about regular SQL non-materialized views which are essentially inlined.
I see that now -- makes sense!
> Wish we had some better database primitives to assemble rather than building everything on Postgres - its not ideal for a lot of things.
I'm curious to hear more about this! We agree that better primitives are required, and that's why Materialize is written in Rust using TimelyDataflow[1] and DifferentialDataflow[2] (both developed by Materialize co-founder Frank McSherry). The only relationship between Materialize and Postgres is that we are wire-compatible with Postgres; we don't share any code with Postgres, nor do we depend on it.
[1] https://github.com/TimelyDataflow/timely-dataflow
[2] https://github.com/TimelyDataflow/differential-dataflow
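The core notion behind the differential-dataflow primitive mentioned above can be sketched without the library: data flows as (record, diff) update tuples, and operators keep their outputs consistent by accumulating diffs instead of recomputing from scratch. This is not the differential-dataflow crate's API, just a minimal illustration of the update model.

```rust
use std::collections::HashMap;

/// Accumulate a stream of (key, diff) updates into per-key counts,
/// dropping keys whose accumulated diff returns to zero. A diff of +1
/// represents an insertion and -1 a retraction.
fn consolidate(updates: &[(&str, i64)]) -> HashMap<String, i64> {
    let mut counts: HashMap<String, i64> = HashMap::new();
    for &(key, diff) in updates {
        let c = counts.entry(key.to_string()).or_insert(0);
        *c += diff;
        let now_zero = *c == 0;
        if now_zero {
            counts.remove(key); // fully retracted records vanish
        }
    }
    counts
}

fn main() {
    // Insert "a" twice, insert "b", then retract one "a".
    let updates = [("a", 1), ("a", 1), ("b", 1), ("a", -1)];
    let counts = consolidate(&updates);
    assert_eq!(counts.get("a"), Some(&1));
    assert_eq!(counts.get("b"), Some(&1));
    println!("diff-based counts: {:?}", counts);
}
```

Because each update carries its own diff, a deletion is just another message on the same stream; the operator never needs to rescan its input to stay correct.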
-
7 Real-Time Data Streaming Tools You Should Consider On Your Next Project
Under the hood, Materialize uses Timely Dataflow (TDF) as the stream-processing engine, which lets Materialize take advantage of a distributed, data-parallel compute engine. The great thing about using TDF is that it has been in open-source development since 2014 and has been battle-tested in production at large Fortune 1000-scale companies.
What are some alternatives?
zombodb - Making Postgres and Elasticsearch work together like it's 2023
differential-datalog - DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
materialize - The data warehouse for operational workloads.
TablaM - The practical relational programming language for data-oriented applications
bytewax - Python Stream Processing
readyset - Readyset is a MySQL and Postgres wire-compatible caching layer that sits in front of existing databases to speed up queries and horizontally scale read throughput. Under the hood, ReadySet caches the results of cached select statements and incrementally updates these results over time as the underlying data changes.
mysql-live-select - NPM Package to provide events on updated MySQL SELECT result sets
differential-dataflow - An implementation of differential dataflow using timely dataflow on Rust.
flow - 🌊 Continuously synchronize the systems where your data lives, to the systems where you _want_ it to live, with Estuary Flow. 🌊