temporal_tables
litestream
| | temporal_tables | litestream |
|---|---|---|
| Mentions | 16 | 165 |
| Stars | 897 | 9,997 |
| Growth | - | - |
| Activity | 4.2 | 7.5 |
| Latest commit | 2 months ago | 9 days ago |
| Language | C | Go |
| License | BSD 2-clause "Simplified" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
temporal_tables
-
All the ways to capture changes in Postgres
There is also the temporal_tables extension.
[0] https://github.com/arkhipov/temporal_tables
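The wiring for the extension is compact: you add a system-period range column to the live table, create a matching history table, and attach the extension's `versioning()` trigger. A minimal sketch following the extension's README (table and column names here are illustrative):

```sql
-- Current table with a system-period column maintained by the extension
CREATE TABLE employees (
  name       text PRIMARY KEY,
  salary     numeric,
  sys_period tstzrange NOT NULL DEFAULT tstzrange(current_timestamp, null)
);

-- History table receives superseded row versions
CREATE TABLE employees_history (LIKE employees);

-- versioning() updates sys_period and copies old rows on UPDATE/DELETE
CREATE TRIGGER versioning_trigger
BEFORE INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW EXECUTE PROCEDURE versioning('sys_period', 'employees_history', true);
```

After this, a point-in-time query is just a range containment check, e.g. `WHERE sys_period @> '2023-01-01 00:00:00+00'::timestamptz` against the history table.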
-
Show HN: I made a CMS that uses Git to store your data
- https://github.com/arkhipov/temporal_tables
I haven't used any of these but I work on https://xtdb.com which is also implementing SQL:2011's temporal features :)
-
Data point versioning infrastructure for time traveling to a precise point in time?
It seems like PG has this extension here; has anyone ever used it?
-
Questions about history table pattern
You could look at that or ask me questions about it (disclaimer, I am the author). Also there is https://github.com/arkhipov/temporal_tables/
- Modern solutions for database auditing?
- How Postgres Audit Tables Saved Us from Taking Down Production
-
spring-data-jpa-temporal: a lightweight temporal auditing library
All good. Note there is also https://github.com/arkhipov/temporal_tables/ (which is also type 4 as a postgres extension - pretty similar to what ebean orm is doing)
-
Time-travel options for databases
The Temporal Tables Postgres extension works well. https://github.com/arkhipov/temporal_tables
-
easy master<->master postgresql 11 cluster solution?
If you're doing this across regions, you really, really should reconsider. If you're doing it in the same data center you might be able to get away with it (but then I'm not sure why you're doing it in the first place; if the system fits in one DC then you can probably just scale up). It might be worth considering a sharded & passively combined approach -- i.e. every country has its own data, and there's some huge public schema consisting of all the data, drip-fed into materialized views or tables at regular intervals. You could also combine this with temporal_tables to get a very delayed but theoretically time-consistent (well, aside from clock skew across regions, of course...) view of your DB to query... Really depends on the use case.
-
SQLite the only database you will ever need in most cases
RLS is one of Postgres's most underrated features. It is amazing, can work basically silently and unseen if your programming-language-side tools are good enough, and is documented well (like everything else):
https://www.postgresql.org/docs/current/ddl-rowsecurity.html
But the power of PG is that it doesn't stop there: combine this with an extension like temporal_tables and you can segment by user and time:
https://github.com/arkhipov/temporal_tables
All of this mostly unknown to the thing that's accessing the DB. If that's not enough for you, why not add some auditing with pgaudit:
https://www.pgaudit.org/#section_three
I think it might not actually be hyperbole to say that Postgres is the greatest RDBMS that has ever existed.
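The RLS setup the comment describes takes only a couple of statements. A minimal sketch following the PostgreSQL row-security documentation (the `documents` table and `owner` column are illustrative):

```sql
-- Each row records its owning database role
CREATE TABLE documents (
  id    serial PRIMARY KEY,
  owner text NOT NULL DEFAULT current_user,
  body  text
);

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Non-superuser roles see and modify only their own rows;
-- application queries need no WHERE clause for this filtering
CREATE POLICY documents_owner ON documents
  USING (owner = current_user)
  WITH CHECK (owner = current_user);
```

This is what "silent" means in practice: a plain `SELECT * FROM documents` run as an ordinary role returns only that role's rows, with no changes to application SQL.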
litestream
-
Ask HN: SQLite in Production?
I have not, but I keep meaning to collate everything I've learned into a set of useful defaults just to remind myself what settings I should be enabling and why.
Regarding Litestream, I learned pretty much all I know from their documentation: https://litestream.io/
-
How (and why) to run SQLite in production
This presentation is focused on the use-case of vertically scaling a single server and driving everything through that app server, which is running SQLite embedded within your application process.
This is the sweet-spot for SQLite applications, but there have been explorations and advances to running SQLite across a network of app servers. LiteFS (https://fly.io/docs/litefs/), the sibling to Litestream for backups (https://litestream.io), is aimed at precisely this use-case. Similarly, Turso (https://turso.tech) is a new-ish managed database company for running SQLite in a more traditional client-server distribution.
-
SQLite3 Replication: A Wizard's Guide🧙🏽
This post intends to help you set up replication for SQLite using Litestream.
-
Ask HN: "Time travel" into a SQLite database using the WAL files?
I've been messing around with litestream. It is so cool. But I either found a bug in the -timestamp switch or don't understand it correctly.
What I want to do is time travel into my sqlite database. I'm trying to do some forensics on why my web service returned the wrong data during a production event. Unfortunately, after the event, someone deleted records from the database and I'm unsure what the data looked like and am having trouble recreating the production issue.
Litestream has this great switch: -timestamp. If you use it (AFAICT) you can time travel into your database and go back to the database state at that moment. However, it does not seem to work as I expect it to:
https://github.com/benbjohnson/litestream/issues/564
I have the entirety of the sqlite database from the production event as well. Is there a way I could cycle through the WAL files and restore the database to the point in time before the records I need were deleted?
Will someone take sqlite and compile it into the browser using WASM so I can drag a sqlite database and WAL files into it and then using a timeline slider see all the states of the database over time? :)
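The -timestamp workflow the question refers to looks roughly like this, per the Litestream command documentation (the database path and bucket URL are illustrative):

```shell
# Continuously replicate a local SQLite database to object storage
litestream replicate /var/lib/app/app.db s3://mybucket/app.db

# Later: restore the database as it existed at a point in time
# (-o writes to a new file; -timestamp takes an RFC 3339 time)
litestream restore -o /tmp/app-rewind.db \
  -timestamp 2023-05-01T12:00:00Z \
  s3://mybucket/app.db
```

Restoring to a separate output file, as above, lets you inspect the historical state side by side with the current database, which is essentially the forensics workflow the comment is after.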
-
Ask HN: Are you using SQLite and Litestream in production?
We're using SQLite in production very heavily with millions of databases and fairly high operations throughput.
But we did run into some scariness around trying to use Litestream that put me off it for the time being. Litestream is really cool but it is also very much a cool hack and the risk of database corruption issues feels very real.
The scariness I ran into was related to this issue https://github.com/benbjohnson/litestream/issues/510
-
Pocketbase: Open-source back end in 1 file
Litestream is a tool that lets you easily create streaming backups. You can probably just run analytic queries on the backup data and reduce load on your server.
https://litestream.io/
- Litestream – Disaster recovery and continuous replication for SQLite
- Litestream: Replicated SQLite with no main and little cost
-
Why you should probably be using SQLite
One possible strategy is to have one directory/file per customer, which is one SQLite file. But then as the user logs in, you first have to look up which database they should be connected to, or somehow derive it from the user ID/username. Keep all the customer databases in a single directory/disk and constantly "lite stream" to S3.
Because each user is isolated, they'll be writing to their own database. But migrations would be a pain: they will have to be rolled out to each database separately.
One upside is, you can give users the ability to take their data with them, any time. It is just a single file.
[0] https://litestream.io/
-
Monitor your Websites and Apps using Uptime Kuma
Uptime Kuma uses a local SQLite database to store account data, configuration for services to monitor, notification settings, and more. To make sure that our data is available across redeploys, we will bundle Uptime Kuma with Litestream, a project that implements streaming replication for SQLite databases to a remote object storage provider. Effectively, this allows us to treat the local SQLite database as if it were securely stored in a remote database.
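Bundling Litestream alongside an app like this is usually driven by a small config file. A sketch of the shape documented for `litestream.yml` (the database path and bucket URL here are assumptions for illustration):

```yaml
# litestream.yml -- replicate the app's SQLite database to object storage
dbs:
  - path: /app/data/kuma.db
    replicas:
      - url: s3://mybucket/kuma.db
```

On redeploy, the container entrypoint would first run `litestream restore` to pull the latest replica back down, then start the app with `litestream replicate` running alongside it.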
What are some alternatives?
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
rqlite - The lightweight, distributed relational database built on SQLite.
pg_bitemporal - Bitemporal tables in Postgres
pocketbase - Open Source realtime backend in 1 file
pgaudit - PostgreSQL Audit Extension
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
dolt - Dolt – Git for Data
k8s-mediaserver-operator - Repository for k8s Mediaserver Operator project
datasette - An open source multi-tool for exploring and publishing data
sqlcipher - SQLCipher is a standalone fork of SQLite that adds 256 bit AES encryption of database files and other security features.
beekeeper-studio - Modern and easy to use SQL client for MySQL, Postgres, SQLite, SQL Server, and more. Linux, MacOS, and Windows.
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines