debezium vs iceberg
| | debezium | iceberg |
|---|---|---|
| Mentions | 80 | 18 |
| Stars | 9,843 | 5,481 |
| Growth | 1.8% | 3.5% |
| Activity | 9.9 | 9.9 |
| Latest commit | 4 days ago | 5 days ago |
| Language | Java | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
debezium
- Choosing Between a Streaming Database and a Stream Processing Framework in Python
They manage data in the application layer, and your original data stays where it is, so data consistency is no longer the issue it was with streaming databases. You can use a Change Data Capture (CDC) service like Debezium to connect directly to your primary database, do the computational work, and save the result back or send real-time data to output streams.
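In practice, that CDC pattern is usually set up by registering a Debezium connector with Kafka Connect. A minimal sketch of a PostgreSQL connector registration follows; the hostname, credentials, database, and table names are illustrative placeholders, not values from any of the posts above:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "secret",
    "database.dbname": "inventory",
    "topic.prefix": "app",
    "table.include.list": "public.orders"
  }
}
```

POSTed to the Kafka Connect REST API, this would stream every insert, update, and delete on `public.orders` as change events to Kafka topics prefixed with `app`.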
- Generating Avro Schemas from Go types
Both of these articles mention a key player: Debezium. In fact, Debezium has earned a place in modern infrastructure. Let's use a diagram to understand why.
- debezium VS quix-streams - a user-suggested alternative
2 projects | 7 Dec 2023
- How the heck do I validate records with this kind of data?
This might be overkill, but you could use an extra tool like https://debezium.io to capture a log of all creates, updates, and deletes in your table.
- All the ways to capture changes in Postgres
- Managed Relational Databases with AWS RDS and Aurora
If you're considering a relational database for an event-driven architecture, check out Debezium. It lets you stream changes from relational databases and subscribe to change events.
- Real-time Data Processing Pipeline With MongoDB, Kafka, Debezium And RisingWave
- Postgresql to hadoop in real time
https://debezium.io/ comes to mind as an open-source product, but there are a gazillion of these tools out there.
- ClickHouse Advanced Tutorial: Apply CDC from MySQL to ClickHouse
Contrary to how it sounds, it's quite straightforward: the database changes are captured by Debezium and published as events to Apache Kafka, and ClickHouse consumes those changes in partial order via its Kafka engine. Real-time and eventually consistent.
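The ClickHouse side of such a pipeline can be sketched with DDL like the following; the topic, table, and column names are hypothetical, and it assumes the Debezium events have been flattened to one JSON row per message:

```sql
-- Kafka engine table: consumes change events from the CDC topic
-- (broker, topic, and schema here are illustrative assumptions)
CREATE TABLE orders_queue (
    id UInt64,
    status String,
    _version UInt64
) ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'mysql.shop.orders',
         kafka_group_name = 'clickhouse-cdc',
         kafka_format = 'JSONEachRow';

-- Target table: ReplacingMergeTree keeps the latest version per key,
-- which is what makes the result eventually consistent
CREATE TABLE orders (
    id UInt64,
    status String,
    _version UInt64
) ENGINE = ReplacingMergeTree(_version)
ORDER BY id;

-- Materialized view continuously moves rows from the queue into the target
CREATE MATERIALIZED VIEW orders_mv TO orders AS
SELECT id, status, _version FROM orders_queue;
```

The `ReplacingMergeTree` collapses multiple change events for the same key during background merges, so reads may briefly see duplicates before converging, matching the "eventually consistent" description above.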
- Debezium: Stream Changes from Your Database
iceberg
- Iceberg won the table format war: But not in the way you thought it might
- Lakehouse using AWS Athena on Iceberg Concerns
- apache/iceberg: Apache Iceberg
- What are the main things I need to know to be hired as a Java developer?
- Have you used Athena Iceberg for small(-ish) data?
- Is Data Lakehouse a threat to Snowflake?
- Snowflake vs databricks cloud/labor cost
This is interesting, imo.
- Setting the Table: Benchmarking Open Table Formats
- Spark Dynamic Partition Overwrite Mode Replaces Existing Data
If you're using Iceberg as your table format, it had bugs with MERGE INTO on non-nullable columns until September: https://github.com/apache/iceberg/pull/5679
- How to migrate delta tables to iceberg?
Yeah, this capability is a work in progress and a discussion point in the Iceberg community - https://github.com/apache/iceberg/pull/5331
What are some alternatives?
maxwell - Maxwell's daemon, a mysql-to-json kafka producer
kudu - Mirror of Apache Kudu
kafka-connect-bigquery - A Kafka Connect BigQuery sink connector
hudi - Upserts, Deletes And Incremental Processing on Big Data.
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
Apache Avro - Apache Avro is a data serialization system.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
Dask - Parallel computing with task scheduling