pgreplay VS simonwillisonblog-backup

Compare pgreplay vs simonwillisonblog-backup and see what their differences are.

pgreplay

pgreplay reads a PostgreSQL log file (*not* a WAL file), extracts the SQL statements and executes them in the same order and relative time against a PostgreSQL database cluster. (by laurenz)
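To make the mechanism concrete: the core of the replay idea is to pace the extracted statements by their original timestamps. Below is a minimal Python sketch of that loop, as an illustration only (pgreplay itself is written in C); the log-parsing step, the hard-coded statements, and the connection string are all assumptions:

```python
import time

import psycopg2  # assumption: any PostgreSQL driver would do for this illustration

# Toy version of the replay loop: statements already extracted from the server
# log, each paired with the offset (in seconds) at which it originally ran.
statements = [
    (0.00, "SELECT 1"),
    (0.35, "SELECT count(*) FROM pg_class"),
    (1.20, "SELECT now()"),
]

conn = psycopg2.connect("dbname=target host=localhost")  # hypothetical target cluster
cur = conn.cursor()

start = time.monotonic()
first_offset = statements[0][0]
for offset, sql in statements:
    # Sleep until this statement's original relative time has elapsed,
    # so the replay preserves both ordering and pacing.
    delay = (offset - first_offset) - (time.monotonic() - start)
    if delay > 0:
        time.sleep(delay)
    cur.execute(sql)

conn.close()
```

pgreplay itself goes further: per its README it also replays connections and disconnections and can scale replay speed, and it requires specific server log settings to produce a usable log.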
|             | pgreplay                                  | simonwillisonblog-backup |
|-------------|-------------------------------------------|--------------------------|
| Mentions    | 2                                         | 7                        |
| Stars       | 205                                       | 15                       |
| Growth      | -                                         | -                        |
| Activity    | 4.2                                       | 9.9                      |
| Last commit | 7 months ago                              | 1 day ago                |
| Language    | C                                         | -                        |
| License     | GNU General Public License v3.0 or later  | -                        |
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

pgreplay

Posts with mentions or reviews of pgreplay. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-20.
  • Versioning data in Postgres? Testing a Git like approach
    9 projects | news.ycombinator.com | 20 Oct 2023
    pgreplay parses not the WAL (Write-Ahead Log) but the regular server log file: https://github.com/laurenz/pgreplay

    From "A PostgreSQL Docker container that automatically upgrades your database" (2023) https://news.ycombinator.com/item?id=36748041 :

    pgkit wraps Postgres PITR backup and recovery:

  • Real Application Testing on 🚀YugabyteDB with 🐘pgreplay
    1 project | dev.to | 3 Oct 2022
    This blog post was just to verify that it works with YugabyteDB; check the pgreplay documentation for more, as everything works the same in YugabyteDB. If you want to capture a workload from connections on multiple database nodes, each node will have its own logfile, and you can merge them. The session ID (the 6th field in the csvlog, built from the start time and backend PID) will probably not collide with another node's, but you can make it unique by concatenating a node number if you want. The replay connects to one node, but through an HA proxy the connections can be distributed across multiple nodes. It all depends on what you want to capture and where you want to replay it. Capturing from PostgreSQL and replaying to YugabyteDB is also a good way to check that everything works the same, without performance regressions.
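A minimal sketch of that merge step (the `node<N>.csv` file layout and the use of Python are assumptions for illustration, not from the post): suffix each node's session-ID field with a node number, then sort the combined rows by the log timestamp:

```python
import csv
import glob

# Merge per-node csvlog files and make the session ID (6th field, index 5)
# unique across nodes by suffixing a node number, as the post suggests.
rows = []
for path in sorted(glob.glob("node*.csv")):
    node = path.removeprefix("node").removesuffix(".csv")
    with open(path, newline="") as f:
        for row in csv.reader(f):
            row[5] = f"{row[5]}.{node}"  # e.g. "6532a1.1a2b" -> "6532a1.1a2b.1"
            rows.append(row)

# The first csvlog field is the log timestamp; sorting on it restores a global
# order (this assumes all nodes log timestamps in the same time zone and format).
rows.sort(key=lambda r: r[0])

with open("merged.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```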

simonwillisonblog-backup

Posts with mentions or reviews of simonwillisonblog-backup. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-02.
  • Tracking SQLite Database Changes in Git
    7 projects | news.ycombinator.com | 2 Nov 2023
    > I’ve been running that for a couple of years in this repo: https://github.com/simonw/simonwillisonblog-backup - which provides a backup of my blog’s PostgreSQL Django database (first converted to SQLite and then dumped out using sqlite-diffable)

    I'm curious, what is the reason you chose not to use pg_dump, but instead opted to convert to SQLite and then dump the DB using sqlite-diffable?

    On a project I'm working on, I'd like to dump our Postgres schema into individual files for each object (i.e., one file for each table, function, stored proc, etc.), but I haven't spent enough time to see if pg_dump could actually do that. We're just outputting files by object type for now (one file each for tables, functions, and stored procs).
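Not an answer from the thread, but a possible starting point for that question: plain-format `pg_dump --schema-only` output precedes each object with a `-- Name: ...; Type: ...; Schema: ...` comment, so a short script can split the dump into one file per object. A rough sketch (the database name is a placeholder):

```python
import re
import subprocess
from pathlib import Path

# "mydb" is a placeholder; pg_dump's plain schema-only output precedes each
# object with a "-- Name: ...; Type: ...; Schema: ...;" comment line.
dump = subprocess.run(
    ["pg_dump", "--schema-only", "mydb"],
    capture_output=True, text=True, check=True,
).stdout

header = re.compile(r"^-- Name: (?P<name>.+?); Type: (?P<type>.+?);", re.M)
matches = list(header.finditer(dump))
for i, m in enumerate(matches):
    end = matches[i + 1].start() if i + 1 < len(matches) else len(dump)
    obj_type = m.group("type").strip().lower().replace(" ", "_")  # e.g. "table"
    obj_name = m.group("name").split("(")[0].strip()              # drop function args
    target = Path("schema") / obj_type
    target.mkdir(parents=True, exist_ok=True)
    # One file per object, grouped into a directory per object type.
    (target / f"{obj_name}.sql").write_text(dump[m.start():end])
```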

  • Versioning data in Postgres? Testing a Git like approach
    9 projects | news.ycombinator.com | 20 Oct 2023
  • WordPress Core to start using SQLite Database
    5 projects | news.ycombinator.com | 26 Jul 2023
    My personal blog runs on Django + PostgreSQL, and I got fed up with not having a version history of changes I made to my content there.

    I solved that by setting up a GitHub repo that mirrors the content from my database to flat files a few times a day and commits any changes.

    It's worked out really well so far. It wasn't much trouble to set up and it's now been running for nearly three years, capturing 1400+ changes.

    I'd absolutely consider using the same technique for a commercial project in the future:

    Latest commits are here: https://github.com/simonw/simonwillisonblog-backup/commits/m...

    Workflow is https://github.com/simonw/simonwillisonblog-backup/blob/main...
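The linked workflow, per the earlier comment, converts the PostgreSQL database to SQLite and dumps it with sqlite-diffable. Independent of those specific tools, here is a minimal Python sketch of the same mirror-to-flat-files idea, writing one deterministic JSON file per table for git to diff ("blog.db" and the output layout are assumptions):

```python
import json
import sqlite3
from pathlib import Path

# "blog.db" is a placeholder for a local copy of the production data
# (the real repo first converts its PostgreSQL database to SQLite).
conn = sqlite3.connect("blog.db")
conn.row_factory = sqlite3.Row
out = Path("dump")
out.mkdir(exist_ok=True)

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
)]
for table in tables:
    rows = [dict(r) for r in conn.execute(f'SELECT * FROM "{table}" ORDER BY rowid')]
    # Deterministic ordering and formatting keep diffs minimal between runs,
    # so a scheduled job only produces a commit when content actually changed.
    (out / f"{table}.json").write_text(json.dumps(rows, indent=2, default=str))
```

A scheduled CI job then runs the dump and commits only when git reports changes, which is what yields the diff history the post describes.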

  • How Postgres Triggers Can Simplify Your Back End Development
    2 projects | news.ycombinator.com | 23 Apr 2023
    If you really, really need to be able to see a SQL schema representing the current state, a cheap trick is to run an automation on every deploy that snapshots the schema and writes it to a GitHub repository.

    I do a version of that for my own (Django-powered) blog here: https://github.com/simonw/simonwillisonblog-backup/blob/main...

  • Blog with Markdown and Git, and degrade gracefully through time
    14 projects | news.ycombinator.com | 8 Feb 2021
    My blog is Django and PostgreSQL on Heroku, but last year I decided I wanted a reliable long-term public backup... so I set up a scheduled GitHub Actions workflow to back it up to a git repository.

    Bonus feature: since it runs nightly it gives me diffs of changes I make to my content, including edits to old posts.

    The backups are in this repo: https://github.com/simonw/simonwillisonblog-backup