peerdb VS materialize

Compare peerdb vs materialize and see how they differ.

                     peerdb                                      materialize
Mentions             7                                           120
Stars                1,816                                       5,598
Star growth (m/m)    15.4%                                       0.8%
Activity             9.9                                         10.0
Last commit          6 days ago                                  about 9 hours ago
Language             Go                                          Rust
License              GNU General Public License v3.0 or later    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

peerdb

Posts with mentions or reviews of peerdb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-06.
  • PeerDB Streams – Simple, Native Postgres Change Data Capture
    4 projects | news.ycombinator.com | 6 May 2024
  • Pgwire: a Rust library for PostgreSQL compatible application
    2 projects | news.ycombinator.com | 20 Mar 2024
    We at PeerDB (https://github.com/PeerDB-io/peerdb) were early adopters of Pgwire to implement our Postgres-compatible SQL layer for ETL. Very easy to work with. It saved us multiple months of effort compared to building it from scratch.
  • FLaNK AI Weekly 18 March 2024
    39 projects | dev.to | 18 Mar 2024
  • Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
    7 projects | news.ycombinator.com | 30 Jan 2024
    We've been using the Ubicloud runner for a while at PeerDB[1]. Great value, and especially the ARM runners have been helpful in getting our CI costs down. The team is really responsive and added ARM runner support within a few weeks of us requesting it.

    [1] https://github.com/PeerDB-io/peerdb

  • Benchmarking Postgres Replication: PeerDB vs. Airbyte
    1 project | news.ycombinator.com | 10 Oct 2023
    Thanks for posting this question. Composite primary key support is actively being worked on and should be available in 1-2 weeks :) - https://github.com/PeerDB-io/peerdb/pull/499
  • Launch HN: PeerDB (YC S23) – Fast, Native ETL/ELT for Postgres
    2 projects | news.ycombinator.com | 27 Jul 2023
    Hi HN! I'm Sai, the co-founder and CEO of PeerDB (https://www.peerdb.io/), a Postgres-first data-movement platform that makes moving data in and out of Postgres fast and simple. PeerDB is free and open (https://github.com/PeerDB-io/peerdb), we provide a Docker stack for users to try it out, and there's a 5-minute quickstart at https://docs.peerdb.io/quickstart.

    For the past 8 years, working on Postgres on Azure at Microsoft and before that at Citus Data, I've worked closely with customers running Postgres at the heart of their data stack, storing anywhere from 10s of GB to 10s of TB of data.

    This was when I got exposed to the challenges customers faced when moving data in and out of Postgres. Usually they would try existing ETL tools, fail, and decide to build in-house solutions. Common issues with these tools included painfully slow syncs (moving 100s of GB of data took days), flakiness and unreliability (frequent crashes, loss of data precision on the target, etc.), and limited features (lack of configurability, unsupported data types, and so on).

    I remember a specific scenario where a tool didn't support something as simple as Postgres' COPY command to ingest data, which would have improved throughput by orders of magnitude. We (the customer and I) reached out to the company to request this feature. They couldn't prioritize it because it wasn't easy for them: their tech stack was designed to support 100s of connectors rather than native Postgres features.
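
    As a minimal sketch of the difference being described (the table and columns here are hypothetical), this is what per-row ingestion versus a native Postgres bulk load looks like:

        -- Row-by-row ingestion: one statement, parse, and round-trip per row
        INSERT INTO public.events (id, payload) VALUES (1, '{"k": "v"}');

        -- Native Postgres bulk load: COPY streams many rows in one command
        -- and is typically orders of magnitude faster for large loads
        COPY public.events (id, payload) FROM STDIN WITH (FORMAT csv);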

    After multiple such occurrences, I thought: why not build a tool specialized for Postgres and make the lives of many Postgres users easier? I reached out to my long-time buddy Kaushik, who was building operating systems at Google and had led data teams at Safegraph and Palantir. We spent a few weeks building an MVP that streamed data in real time from Postgres to BigQuery. It was 10 times faster than existing tools and maintained data freshness of under 30 seconds. We realized there were many Postgres-native and infrastructural optimizations we could make to provide a rich data-movement experience for Postgres users. That is when we decided to start PeerDB!

    We started with two main use cases: Real-time Change Data Capture from Postgres (demo: https://docs.peerdb.io/usecases/realtime-cdc#demo) and Real-time Streaming of query results from Postgres (demo: https://docs.peerdb.io/usecases/realtime-streaming-of-query-...). The 2nd demo shows PeerDB streaming a table with 100M rows from Postgres to Snowflake.

    We implement multiple optimizations to provide a fast, reliable, feature-rich experience. For performance, we can parallelize the initial load of a large table while still ensuring consistency, so syncing 100s of GB goes from days to minutes. We do this by logically partitioning the table based on internal tuple identifiers (CTID) and streaming those partitions in parallel (inspired by this DuckDB blog: https://duckdb.org/2022/09/30/postgres-scanner.html#parallel...).
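
    A rough sketch of the idea (table name and ranges hypothetical; PeerDB's actual partitioning logic lives in its repo): each worker reads a disjoint ctid range, and consistency can come from all workers sharing one exported snapshot.

        -- Worker 1: first logical partition of the table
        SELECT * FROM big_table
        WHERE ctid >= '(0,0)'::tid AND ctid < '(100000,0)'::tid;

        -- Worker 2: next partition, streamed over a separate connection
        SELECT * FROM big_table
        WHERE ctid >= '(100000,0)'::tid AND ctid < '(200000,0)'::tid;

        -- For a consistent view across workers, one session can call
        -- pg_export_snapshot() and each worker can run
        -- SET TRANSACTION SNAPSHOT '<snapshot_id>' before reading.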

    For CDC, we don't use Debezium; rather, we handle replication natively: reading the slot, replicating the changes, keeping state, etc. We made this choice mainly for flexibility, since staying native helps us use existing and future Postgres enhancements more effectively. For example, if the order of rows across tables on the target is not important, we can parallelize reading of a single slot across multiple tables and improve performance. Our architecture is designed for real-time syncs, which enables data freshness of a few 10s of seconds even at large throughputs (10k+ tps).
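
    To make "reading the slot" concrete, here is a minimal sketch using Postgres' built-in SQL functions and the test_decoding demo plugin (PeerDB itself works over the replication protocol, e.g. with pgoutput, so this is illustrative only):

        -- Create a logical replication slot on the source database
        SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

        -- Peek at pending changes without consuming them
        SELECT lsn, xid, data
        FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);

        -- Consume changes (advances the slot) once they are safely
        -- applied downstream
        SELECT lsn, xid, data
        FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);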

    We have fault-tolerance mechanisms for reliability (https://blog.peerdb.io/using-temporal-to-scale-data-synchron...) and support multiple features, including log-based (CDC) and query-based streaming, efficient syncing of tables with large (TOAST) columns, and configurable batching and parallelism to prevent OOMs and crashes.

    For usability, we provide a Postgres-compatible SQL layer for data movement. This makes data engineers' lives much easier: they can develop pipelines using a framework they are familiar with, without needing to deal with custom UIs and REST APIs, and they can use Postgres' 100s of integrations to build and manage ETL. We extend Postgres' SQL grammar with a few new, intuitive SQL commands to enable real-time data streaming across stores. Because of this, we were able to add dbt integration via Dagster (in private preview) in a few hours! We expect data engineers to unlock similar integrations with PeerDB easily, and we plan to make this grammar richer as we evolve.
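
    A hedged sketch of what this extended grammar looks like, loosely based on the PeerDB quickstart (peer names, options, and exact keywords here are illustrative; see docs.peerdb.io for the real syntax):

        -- Register a source peer (a second peer, target_sf, is assumed
        -- to be created the same way for the destination)
        CREATE PEER source_pg FROM POSTGRES WITH (
          host = 'source.example.com', port = 5432,
          user = 'postgres', password = '...', database = 'app'
        );

        -- Set up a real-time CDC mirror between the two peers
        CREATE MIRROR events_cdc FROM source_pg TO target_sf
        WITH TABLE MAPPING (public.events:public.events);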

    PeerDB consists of the following components to handle data replication:

    (1) PeerDB Server: uses the pgwire protocol to mimic a PostgreSQL server and is responsible for query routing, generating gRPC requests to the Flow API. It relies on AST analysis to make informed routing decisions.

    (2) Flow API: an API layer that handles gRPC commands and orchestrates the data-sync operations.

    (3) Flow Workers: execute the data read/write operations from the source to the destination. Built to scale horizontally, they interact with Temporal for increased resilience; the workers do all of the heavy lifting and carry data-store-specific optimizations.

    The supported replication types are CDC streaming replication and query-based batch replication.

    Currently we support 6 target data stores for data movement out of Postgres (BigQuery, Snowflake, Postgres, S3, Kafka, etc.). This doc captures the current status of the connectors: https://docs.peerdb.io/sql/commands/supported-connectors.

    As we spoke to more customers, we realized that getting data into Postgres at scale is equally important and equally hard. For example, one of our customers wants to periodically sync data from multiple SQL Server instances (running on the edge) to their centralized Postgres database. Requests for Oracle-to-Postgres migrations are also common. So we now also support source data stores with Postgres as the target (currently SQL Server and Postgres itself, with more to come).

    We are actively working with customers to onboard them onto our self-hosted enterprise offering. Our fully hosted cloud offering is in private preview. We haven't yet decided on pricing. One common concern we've heard from customers is that existing tools are expensive and charge based on the amount of data transferred. To address this, we are considering a more transparent pricing model, for example pricing based on provisioned hardware (CPU, memory, disk). We're open to feedback on this!

    Check out our GitHub repo (https://github.com/PeerDB-io/peerdb) and give it a spin (5-minute quickstart: https://docs.peerdb.io/quickstart).

    We want to provide the world’s best data-movement experience for Postgres. We would love to get your feedback on product experience, our thesis and anything else that comes to your mind. It would be super useful for us. Thank you!

materialize

Posts with mentions or reviews of materialize. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-17.
  • Ask HN: How Can I Make My Front End React to Database Changes in Real-Time?
    8 projects | news.ycombinator.com | 17 Apr 2024
    [2] https://materialize.com/
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    To fully leverage the "data is the new oil" concept, companies need databases designed to manage vast amounts of data instantly. This need has led to different database forms, including NoSQL databases, vector databases, time-series databases, graph databases, in-memory databases, and in-memory data grids. Recent years have seen the rise of cloud-based streaming databases such as RisingWave, Materialize, DeltaStream, and TimePlus. While each has a distinct commercial and technical approach, their overarching goal remains consistent: to offer users cloud-based streaming database services.
  • Proton, a fast and lightweight alternative to Apache Flink
    7 projects | news.ycombinator.com | 30 Jan 2024
    > Materialize no longer provide the latest code as an open-source software that you can download and try. It turned from a single binary design to cloud-only micro-service

    Materialize CTO here. Just wanted to clarify that Materialize has always been source available, not OSS. Since our initial release in 2020, we've been licensed under the Business Source License (BSL), like MariaDB and CockroachDB. Under the BSL, each release does eventually transition to Apache 2.0, four years after its initial release.

    Our core codebase is absolutely still publicly available on GitHub [0], and our developer guide for building and running Materialize on your own machine is still public [1].

    It is true that we substantially rearchitected Materialize in 2022 to be more "cloud-native". The new cloud offering provides horizontal scalability and fault tolerance, our two most requested features in the single-binary days. I wouldn't call the new architecture a microservices design, though! There are only 2-3 services in it, each quite substantial (loosely: a compute service, an orchestration service, and, soon, a load-balancing service).

    We do push folks to sign up for a free trial of our hosted cloud offering [2] these days, rather than having them start off by running things locally, as we generally want folks' first impression of Materialize to be of the version we support for production use cases. An all-in-one, single-machine Docker image does still exist, if you know where to look, but it's very much use-at-your-own-risk; we don't recommend it for anything serious. It's there to support e.g. academic work that wants to evaluate Materialize's ability to incrementally maintain recursive SQL queries.

    If folks have questions about Materialize, we've got a lively community Slack [3] where you can connect directly with our product and engineering teams.

    [0]: https://github.com/MaterializeInc/materialize/tree/main

  • What I Talk About When I Talk About Query Optimizer (Part 1): IR Design
    7 projects | news.ycombinator.com | 29 Jan 2024
  • We Built a Streaming SQL Engine
    3 projects | news.ycombinator.com | 21 Oct 2023
    Some recent solutions to this problem include Differential Dataflow and Materialize. It would be neat if Postgres adopted something similar for live-updating materialized views; a sketch of the Materialize approach follows after the links below.

    https://github.com/timelydataflow/differential-dataflow

    https://materialize.com/
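
    For a flavor of what a live-updating view looks like in Materialize (the schema here is hypothetical; SUBSCRIBE is Materialize's command for streaming changes from a view):

        -- Incrementally maintained view: updated as new orders arrive,
        -- not recomputed on a refresh schedule
        CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

        -- Stream every change to the view's results as it happens
        SUBSCRIBE TO order_totals;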

  • Ask HN: Who is hiring? (October 2023)
    9 projects | news.ycombinator.com | 2 Oct 2023
    Materialize | Full-Time | NYC Office or Remote | https://materialize.com

    Materialize is an Operational Data Warehouse: a cloud data warehouse with streaming internals, built for work that needs to act on what's happening right now. Keep the familiar SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine, so complex queries are always up to date.

    Materialize is the operational data warehouse built from the ground up to meet the needs of modern data products: fresh, correct, and scalable, all in a familiar SQL UI.

    Senior/Staff Product Manager - https://grnh.se/69754ebf4us

    Senior Frontend Engineer - https://grnh.se/7010bdb64us

    ===

    Investors include Redpoint, Lightspeed and Kleiner Perkins.

  • Ask HN: Who is hiring? (June 2023)
    14 projects | news.ycombinator.com | 1 Jun 2023
    Materialize | EM (Compute), Senior PM | New York, New York | https://materialize.com/

    You shouldn't have to throw away the database to build with fast-changing data. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.

    That is Materialize, the only true SQL streaming database built from the ground up to meet the needs of modern data products: fresh, correct, and scalable, all in a familiar SQL UI.

    Engineering Manager, Compute - https://grnh.se/4e14099f4us

    Senior Product Manager - https://grnh.se/587c36804us

    VP of Marketing - https://grnh.se/9caac4b04us

  • What are your favorite tools or components in the Kafka ecosystem?
    10 projects | /r/apachekafka | 31 May 2023
  • Ask HN: Who is hiring? (May 2023)
    13 projects | news.ycombinator.com | 1 May 2023
  • Dozer: A scalable Real-Time Data APIs backend written in Rust
    6 projects | /r/rust | 10 Apr 2023
    How does it compare to https://materialize.com/ ?

What are some alternatives?

When comparing peerdb and materialize you can also consider the following projects:

pglogical - Logical Replication extension for PostgreSQL 15, 14, 13, 12, 11, 10, 9.6, 9.5, 9.4 (Postgres), providing much faster replication than Slony, Bucardo or Londiste, as well as cross-version upgrades.

ClickHouse - ClickHouse® is a free analytics DBMS for big data

transfer - Database replication platform that leverages change data capture. Stream production data from databases to your data warehouse (Snowflake, BigQuery, Redshift) in real-time.

risingwave - SQL stream processing, analytics, and management. We decouple storage and compute to offer speedy bootstrapping, dynamic scaling, time-travel queries, and efficient joins.

realtime - Broadcast, Presence, and Postgres Changes via WebSockets

openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.

cloudquery - The open source high performance ELT framework powered by Apache Arrow

rust-kafka-101 - Getting started with Rust and Kafka

bytebase - The GitHub/GitLab for database DevOps. World's most advanced database DevOps and CI/CD for Developer, DBA and Platform Engineering teams.

dbt-expectations - Port(ish) of Great Expectations to dbt test macros

astro-sdk - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow.

scryer-prolog - A modern Prolog implementation written mostly in Rust.