| | ldbc_snb_bi | materialize |
|---|---|---|
| Mentions | 3 | 120 |
| Stars | 33 | 5,598 |
| Growth | - | 1.0% |
| Activity | 7.7 | 10.0 |
| Latest commit | 3 months ago | 2 days ago |
| Language | Python | Rust |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ldbc_snb_bi
-
Demand the impossible: rigorous database benchmarking
Rigorous database benchmarking is indeed very difficult and time-consuming. I spent the last ~7 years working on benchmarks for graph processing systems in the Linked Data Benchmark Council (LDBC) [1], originally established in 2012 as an EU research project.
LDBC creates TPC-style application-level database benchmarks which can be used for system-to-system comparison. We provide detailed specifications, data generators, benchmark frameworks, and multiple reference implementations. The benchmarks are implemented by vendors for their database products, and the implementations submitted to be run by independent third-party auditors to ensure their correctness and reproducibility.
We have found that there is a market for audits of graph processing systems, though it is quite small: over the last 4 years, we have published 34 audited results, see e.g. [2] and [3].
A major problem we face is that the process of implementing the benchmark for a system and getting an audited result is long (and therefore expensive). Vendors spend months implementing and tuning the benchmarks. It is also typical for the auditor to spend 50+ hours on the auditing process, which includes a lengthy code review step, setting up the system, running the experiments, testing ACID properties, writing a report, etc. The length of the process is exacerbated by the lack of standard graph query languages, which may require the auditor to learn a new query language or programming language.
We have tried to mitigate this problem by improving our documentation, creating more reference implementations, and distributing pre-generated data sets. There are new standard graph query languages (SQL/PGQ, GQL), but their adoption is still very limited. Overall, the auditing process remains long, which is mainly caused by the essential complexity of the problem: implementing an application-level benchmark and getting reliable results is very difficult.
[1] https://ldbcouncil.org/introduction/
[2] https://ldbcouncil.org/benchmarks/snb-interactive
[3] https://ldbcouncil.org/benchmarks/snb-bi/
-
Benchgraph Backstory: The Untapped Potential
At first, the plan was to use only the LDBC dataset and write different queries for it, but LDBC has a set of well-designed queries that were specifically prepared to stress the database. Each query targets a special scenario, also called a “choke point.” To be clear, they do not involve deep graph traversals of around 100 hops, but they are definitely more complex than the ones written for the Pokec dataset. There are two sets of queries for the LDBC SNB: interactive and business intelligence. LDBC provides a reference Cypher implementation of both query sets for Neo4j. We took those queries, tweaked the data types, and made the queries work on Memgraph. Again, to be perfectly clear, this is NOT an official implementation of an LDBC benchmark; this goes for both the interactive and business intelligence queries. The queries were used as the basis for running the benchmark.
-
Postgres: The Graph Database You Didn't Know You Had
I designed and maintain several graph benchmarks in the Linked Data Benchmark Council, including workloads aimed at databases [1]. We make no restrictions on implementations; they can use any query language, like Cypher, SQL, etc.
In our last benchmark aimed at analytical systems [2], we found that SQL queries using WITH RECURSIVE can work for expressing reachability and even weighted shortest path queries. However, formulating an efficient algorithm yields very complex SQL queries [3], and their execution requires a system with a sophisticated optimizer such as Umbra, developed at TU Munich [4]. Industry SQL systems are not yet at this level, but they may reach it in the future.
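The reachability case mentioned above is the simplest instance of a WITH RECURSIVE query. A minimal sketch, using SQLite (which supports recursive common table expressions) and a hypothetical toy `edges` table rather than the LDBC schema:

```python
import sqlite3

# Toy graph for illustration: edges 1->2->3->4, plus a disconnected 5->6.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4), (5, 6)])

# Reachability from node 1 as a recursive CTE.
rows = conn.execute("""
    WITH RECURSIVE reach(node) AS (
        VALUES (1)                     -- start vertex
        UNION                          -- UNION (not UNION ALL) deduplicates,
        SELECT e.dst                   -- so the recursion terminates on cycles
        FROM edges AS e
        JOIN reach AS r ON e.src = r.node
    )
    SELECT node FROM reach ORDER BY node
""").fetchall()

reachable = [n for (n,) in rows]
print(reachable)  # [1, 2, 3, 4]
```

Weighted shortest paths can be expressed in the same style by carrying a cost column through the recursion, but as noted above, doing so efficiently quickly leads to much more involved SQL.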
Another direction to include graph queries in SQL is the upcoming SQL/PGQ (Property Graph Queries) extension. I'm involved in a project at CWI Amsterdam to incorporate this language into DuckDB [5].
[1] https://ldbcouncil.org/benchmarks/snb/
[2] https://www.vldb.org/pvldb/vol16/p877-szarnyas.pdf
[3] https://github.com/ldbc/ldbc_snb_bi/blob/main/umbra/queries/...
[4] https://umbra-db.com/
[5] https://www.cidrdb.org/cidr2023/slides/p66-wolde-slides.pdf
materialize
-
Ask HN: How Can I Make My Front End React to Database Changes in Real-Time?
[2] https://materialize.com/
-
Choosing Between a Streaming Database and a Stream Processing Framework in Python
To fully leverage the “data is the new oil” concept, companies require special databases designed to manage vast amounts of data instantly. This need has led to different database forms, including NoSQL databases, vector databases, time-series databases, graph databases, in-memory databases, and in-memory data grids. Recent years have seen the rise of cloud-based streaming databases such as RisingWave, Materialize, DeltaStream, and TimePlus. While they each have distinct commercial and technical approaches, their overarching goal remains consistent: to offer users cloud-based streaming database services.
-
Proton, a fast and lightweight alternative to Apache Flink
> Materialize no longer provide the latest code as an open-source software that you can download and try. It turned from a single binary design to cloud-only micro-service
Materialize CTO here. Just wanted to clarify that Materialize has always been source available, not OSS. Since our initial release in 2020, we've been licensed under the Business Source License (BSL), like MariaDB and CockroachDB. Under the BSL, each release does eventually transition to Apache 2.0, four years after its initial release.
Our core codebase is absolutely still publicly available on GitHub [0], and our developer guide for building and running Materialize on your own machine is still public [1].
It is true that we substantially rearchitected Materialize in 2022 to be more "cloud-native". Our new cloud offering offers horizontal scalability and fault tolerance—our two most requested features in the single-binary days. I wouldn't call the new architecture a microservices design though! There are only 2-3 services, each quite substantial, in the new architecture (loosely: a compute service, an orchestration service, and, soon, a load balancing service).
We do push folks to sign up for a free trial of our hosted cloud offering [2] these days, rather than starting off by running things locally, as we generally want folks' first impression of Materialize to be of the version that we support for production use cases. An all-in-one single-machine Docker image does still exist, if you know where to look, but it's very much use-at-your-own-risk and we don't recommend it for anything serious; it's there to support e.g. academic work that wants to evaluate Materialize's capabilities to incrementally maintain recursive SQL queries.
If folks have questions about Materialize, we've got a lively community Slack [3] where you can connect directly with our product and engineering teams.
[0]: https://github.com/MaterializeInc/materialize/tree/main
- What I Talk About When I Talk About Query Optimizer (Part 1): IR Design
-
We Built a Streaming SQL Engine
Some recent solutions to this problem include Differential Dataflow and Materialize. It would be neat if postgres adopted something similar for live-updating materialized views.
https://github.com/timelydataflow/differential-dataflow
https://materialize.com/
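The core idea that Differential Dataflow (and Materialize on top of it) applies to view maintenance is to propagate signed deltas through the query plan instead of recomputing from the base tables. A loose sketch of that idea for a `COUNT(*) GROUP BY key` view, not Differential Dataflow's actual API:

```python
from collections import defaultdict

class IncrementalCount:
    """Maintains COUNT(*) GROUP BY key from (key, +1/-1) deltas."""

    def __init__(self):
        self.counts = defaultdict(int)

    def apply(self, key, diff):
        # diff=+1 for an inserted row, diff=-1 for a deleted row.
        self.counts[key] += diff
        if self.counts[key] == 0:
            del self.counts[key]  # drop empty groups, as a real view would

view = IncrementalCount()
for key, diff in [("a", +1), ("a", +1), ("b", +1), ("a", -1)]:
    view.apply(key, diff)

print(dict(view.counts))  # {'a': 1, 'b': 1}
```

Each update costs O(1) regardless of table size, which is what makes live-updating materialized views feasible; the hard part that these systems solve is doing the same for joins, recursion, and arbitrary query plans.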
-
Ask HN: Who is hiring? (October 2023)
Materialize | Full-Time | NYC Office or Remote | https://materialize.com
Materialize is an Operational Data Warehouse: a cloud data warehouse with streaming internals, built for work that needs action on what’s happening right now. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.
Materialize is the operational data warehouse built from the ground up to meet the needs of modern data products: Fresh, Correct, Scalable — all in a familiar SQL UI.
Senior/Staff Product Manager - https://grnh.se/69754ebf4us
Senior Frontend Engineer - https://grnh.se/7010bdb64us
===
Investors include Redpoint, Lightspeed and Kleiner Perkins.
-
Ask HN: Who is hiring? (June 2023)
Materialize | EM (Compute), Senior PM | New York, New York | https://materialize.com/
You shouldn't have to throw away the database to build with fast-changing data. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.
That is Materialize, the only true SQL streaming database built from the ground up to meet the needs of modern data products: Fresh, Correct, Scalable — all in a familiar SQL UI.
Engineering Manager, Compute - https://grnh.se/4e14099f4us
Senior Product Manager - https://grnh.se/587c36804us
VP of Marketing - https://grnh.se/9caac4b04us
- What are your favorite tools or components in the Kafka ecosystem?
- Ask HN: Who is hiring? (May 2023)
-
Dozer: A scalable Real-Time Data APIs backend written in Rust
How does it compare to https://materialize.com/ ?
What are some alternatives?
ldbc_snb_datagen_spark - Synthetic graph generator for the LDBC Social Network Benchmark, running on Spark
ClickHouse - ClickHouse® is a free analytics DBMS for big data
spicedb - Open Source, Google Zanzibar-inspired permissions database to enable fine-grained access control for customer applications
risingwave - SQL stream processing, analytics, and management. We decouple storage and compute to offer speedy bootstrapping, dynamic scaling, time-travel queries, and efficient joins.
ldbc_snb_interactive_v1_impls - Reference implementations for LDBC Social Network Benchmark's Interactive workload.
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
Apache AGE - Graph database optimized for fast analysis and real-time data processing. It is provided as an extension to PostgreSQL.
rust-kafka-101 - Getting started with Rust and Kafka
clair - Vulnerability Static Analysis for Containers
dbt-expectations - Port(ish) of Great Expectations to dbt test macros
quine - Quine • a streaming graph • https://quine.io • Discord: https://discord.gg/GMhd8TE4MR
scryer-prolog - A modern Prolog implementation written mostly in Rust.