| | connectors | temporal_tables |
|---|---|---|
| Mentions | 3 | 16 |
| Stars | 33 | 900 |
| Growth | - | - |
| Activity | 9.9 | 4.2 |
| Latest commit | 3 days ago | 3 months ago |
| Language | Go | C |
| License | GNU General Public License v3.0 or later | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
connectors
-
All the ways to capture changes in Postgres
No. We implemented our own [1] for a few reasons:
* Scaling well to multi-TB DBs without pinning the write-ahead log (potentially filling your DB's disk) while the backfill is happening. Instead, our connector constantly reads the WAL and works well in setups like Supabase that have very restrictive WAL sizes (1GB iirc).
* Incremental fault-tolerant backfills that can be stopped and resumed at will.
* Being able to offer "precise" captures which are logically consistent in terms of the sequence of create/update/delete events.
The last one becomes really interesting when paired with REPLICA IDENTITY FULL, because you can feed the change stream into an incremental computation (differential dataflow, perhaps) to stream updates of a continuous computation.
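For context, REPLICA IDENTITY FULL is a per-table Postgres setting that makes the WAL record the complete old row on updates and deletes, which is what makes the create/update/delete sequence fully reconstructible downstream. A minimal sketch (the table and slot names are made up for illustration):

```sql
-- Hypothetical table: with REPLICA IDENTITY FULL, UPDATE and DELETE
-- records in the WAL carry the entire old row, not just the primary key.
ALTER TABLE orders REPLICA IDENTITY FULL;

-- A logical replication slot a CDC connector can stream changes from.
-- Note: an unread slot pins WAL, which is exactly the disk-fill risk
-- the comment above describes avoiding.
SELECT pg_create_logical_replication_slot('cdc_slot', 'pgoutput');
```

The trade-off is larger WAL volume per write, which is why it is not the default replica identity.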
Our work is based on the Netflix DBLog paper, which we took and ran with.
[1] https://github.com/estuary/connectors/tree/main/source-postg...
-
Why would you ever not use CDC for ELT?
Our connectors themselves are fully OSS (for example, here's PostgreSQL)
-
What Is Dbt and Why Are Companies Using It?
We've used https://github.com/estuary/connectors/pkgs/container/source-... to load data sets in the many terabytes. One caveat: while it's implemented to Airbyte's spec, we've only used it with Flow.
temporal_tables
-
All the ways to capture changes in Postgres
There is also the temporal_tables extension [0].
[0] https://github.com/arkhipov/temporal_tables
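The extension's basic shape, as a hedged sketch closely following the usage shown in its README (table and column names are the README's illustrative ones): you give a table a `tstzrange` period column, pair it with a history table, and a trigger maintains both.

```sql
-- Current-state table with a system-period column.
CREATE TABLE subscriptions (
  name text NOT NULL,
  state text NOT NULL,
  sys_period tstzrange NOT NULL
);

-- History table with the same shape; closed-out row versions land here.
CREATE TABLE subscriptions_history (LIKE subscriptions);

-- The extension's versioning() trigger keeps sys_period and the
-- history table up to date on every write.
CREATE TRIGGER versioning_trigger
BEFORE INSERT OR UPDATE OR DELETE ON subscriptions
FOR EACH ROW EXECUTE PROCEDURE versioning('sys_period', 'subscriptions_history', true);
```

After that, application code writes to `subscriptions` as normal and the history accumulates transparently.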
-
Show HN: I made a CMS that uses Git to store your data
- https://github.com/arkhipov/temporal_tables
I haven't used any of these, but I work on https://xtdb.com, which is also implementing SQL:2011's temporal features :)
-
Data point versioning infrastructure for time traveling to a precise point in time?
It seems like PG has this extension; has anyone here ever used it?
-
Questions about history table pattern
You could look at that or ask me questions about it (disclaimer: I am the author). Also there is https://github.com/arkhipov/temporal_tables/
-
Modern solutions for database auditing?
-
How Postgres Audit Tables Saved Us from Taking Down Production
-
spring-data-jpa-temporal: a lightweight temporal auditing library
All good. Note there is also https://github.com/arkhipov/temporal_tables/ (which is also type 4, as a Postgres extension; pretty similar to what Ebean ORM is doing).
-
Time-travel options for databases
The Temporal Tables Postgres extension works well. https://github.com/arkhipov/temporal_tables
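Assuming the layout from the extension's README (a `sys_period` tstzrange column plus a separate history table; the names here are illustrative), time travel becomes a range-containment query:

```sql
-- Combine live rows and historical versions into one queryable relation.
CREATE VIEW subscriptions_with_history AS
  SELECT * FROM subscriptions
  UNION ALL
  SELECT * FROM subscriptions_history;

-- "As of" query: rows whose validity period contained that instant.
SELECT *
FROM subscriptions_with_history
WHERE sys_period @> '2024-01-01 00:00:00+00'::timestamptz;
```

The `@>` range-containment operator can use a GiST index on `sys_period`, so point-in-time lookups stay cheap as history grows.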
-
easy master<->master postgresql 11 cluster solution?
If you're doing this across regions, you really should reconsider. If you're doing it in the same data center you might be able to get away with it (but then I'm not sure why you're doing it in the first place; if the system fits in one DC, you can probably just scale up). It might be worth considering a sharded and passively combined approach, i.e. every country has its own data, and there's some huge public schema consisting of all the data, drip-fed into materialized views or tables at regular intervals. You could also combine this with temporal_tables to get a very delayed but theoretically time-consistent (well, aside from clock skew across regions, of course...) view of your DB to query. Really depends on the use case.
-
SQLite the only database you will ever need in most cases
One of Postgres's most underrated features. RLS is amazing: it can work basically silently if your language-side tooling is good enough, and it's documented well (like everything else):
https://www.postgresql.org/docs/current/ddl-rowsecurity.html
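A hedged sketch of the "works silently" point: the table, setting name, and policy below are invented for illustration, but the mechanism (a policy filtering rows against a per-session setting) is standard RLS.

```sql
-- Hypothetical multi-tenant table; each connection sets app.current_user
-- and the policy scopes every query to that owner's rows.
CREATE TABLE documents (
  id bigserial PRIMARY KEY,
  owner text NOT NULL,
  body text
);

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY documents_owner_policy ON documents
  USING (owner = current_setting('app.current_user'));
```

Application queries stay plain `SELECT * FROM documents`; the filtering happens in the database, invisible to the caller.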
But the power of PG is that it doesn't stop there: combine this with an extension like temporal_tables and you can segment by user and time:
https://github.com/arkhipov/temporal_tables
All of this is mostly invisible to the thing that's accessing the DB. If that's not enough for you, why not add some auditing with pgaudit:
https://www.pgaudit.org/#section_three
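pgaudit is driven by configuration settings rather than per-table triggers; a minimal sketch of turning it on (the exact class list you'd want depends on your audit requirements):

```sql
-- Requires pgaudit in shared_preload_libraries and superuser rights.
CREATE EXTENSION pgaudit;

-- Log all data-modifying statements and DDL to the server log.
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
SELECT pg_reload_conf();
```

Audit records then appear in the regular Postgres log stream, so existing log shipping picks them up with no application changes.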
I think it might not actually be hyperbole to say that Postgres is the greatest RDBMS that has ever existed.
What are some alternatives?
walex - Postgres change events (CDC) in Elixir
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
pg-event-proxy-example - Send NOTIFY and WAL events from PostgreSQL to upstream services (amqp / redis / mqtt)
pg_bitemporal - Bitemporal tables in Postgres
temporal_tables - PostgreSQL temporal_tables extension in PL/pgSQL, without the need for an external C extension.
pgaudit - PostgreSQL Audit Extension
maxwell - Maxwell's daemon, a mysql-to-json kafka producer
dolt - Dolt – Git for Data
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
datasette - An open source multi-tool for exploring and publishing data
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
beekeeper-studio - Modern and easy to use SQL client for MySQL, Postgres, SQLite, SQL Server, and more. Linux, MacOS, and Windows.