pggen
litestream
| | pggen | litestream |
|---|---|---|
| Mentions | 11 | 165 |
| Stars | 268 | 9,964 |
| Growth | - | - |
| Activity | 6.6 | 7.5 |
| Latest commit | 3 months ago | 11 days ago |
| Language | Go | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pggen
-
Ask HN: ORM or Native SQL?
Cornucopia is neat. I wrote a similar library in Go [1] so I'm very interested in comparing design decisions.
The pros of the generated code per query approach:
- App code is coupled to query outputs and inputs (an API of sorts), not database tables. Therefore, you can refactor your DB without changing app code.
- Real SQL with the full breadth of DB features.
- Real type-checking with what the DB supports.
The cons:
- Type mapping is surprisingly hard to get right, especially with composite types and arrays and custom type converters. For example, a query might return multiple jsonb columns but the app code wants to parse them into different structs.
- Dynamic queries don't work with prepared statements. Prepared statements only support values, not identifiers or scalar SQL sub-queries, so the codegen layer needs a mechanism to template SQL. I haven't built this out yet but would like to.
[1]: https://github.com/jschaf/pggen
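To make the "coupled to query outputs, not tables" point concrete, here is a minimal sketch of what generated-code-per-query can look like. All names are hypothetical illustrations, not pggen's actual generated output; the in-memory querier stands in for the DB-backed implementation.

```go
package main

import "fmt"

// FindAuthorsRow mirrors the output columns of one query, not a table.
// App code depends on this query shape, so the underlying tables can be
// refactored freely as long as the query still yields these columns.
type FindAuthorsRow struct {
	AuthorID  int32
	FullName  string
	BookCount int64
}

// Querier is the generated interface: one type-safe method per SQL query.
type Querier interface {
	FindAuthors(lastName string) ([]FindAuthorsRow, error)
}

// memQuerier stands in for the DB-backed implementation, e.g. in tests.
type memQuerier struct{ byLastName map[string][]FindAuthorsRow }

func (m memQuerier) FindAuthors(lastName string) ([]FindAuthorsRow, error) {
	return m.byLastName[lastName], nil
}

func main() {
	var q Querier = memQuerier{byLastName: map[string][]FindAuthorsRow{
		"Lovelace": {{AuthorID: 1, FullName: "Ada Lovelace", BookCount: 2}},
	}}
	rows, err := q.FindAuthors("Lovelace")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(rows), rows[0].FullName)
}
```

Because app code only ever sees `Querier` and the row structs, swapping the real implementation for a fake in tests is trivial, which is another practical upside of this approach.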
-
What are the things with Go that have made you wish you were back in Spring/.NET/Django etc?
pggen is another fantastic library in this genre, one that specifically targets Postgres. It is driven by pgx. Cannot recommend it enough.
-
Exiting the Vietnam of Programming: Our Journey in Dropping the ORM (In Golang)
> Do you write out 120 "INSERT" statements, 120 "UPDATE" statements, 120 "DELETE" statements as raw strings
Yes. For example: https://github.com/jschaf/pggen/blob/main/example/erp/order/....
> that is also using an ORM
ORM as a term covers a wide swathe of usage. In the smallest definition, an ORM converts DB tuples to Go structs. In common usage, most folks use ORM to mean a generic query builder plus the type conversion from tuples to structs. For other usages, I prefer the Patterns of Enterprise Application Architecture terms [1] like data-mapper, active record, and table-data gateway.
[1]: https://martinfowler.com/eaaCatalog/
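Under the narrowest definition above, the entire "ORM" is a function that turns a DB tuple into a struct (a data mapper in PoEAA terms). A self-contained sketch, with hypothetical names and a plain slice standing in for a driver's row:

```go
package main

import "fmt"

// tuple is one database row as positional values, the way a driver hands it back.
type tuple []any

type user struct {
	ID    int64
	Email string
}

// mapUser converts a DB tuple into a Go struct; under the smallest
// definition of the term, this function alone is the ORM.
func mapUser(t tuple) (user, error) {
	if len(t) != 2 {
		return user{}, fmt.Errorf("want 2 columns, got %d", len(t))
	}
	id, ok := t[0].(int64)
	if !ok {
		return user{}, fmt.Errorf("column 0: want int64, got %T", t[0])
	}
	email, ok := t[1].(string)
	if !ok {
		return user{}, fmt.Errorf("column 1: want string, got %T", t[1])
	}
	return user{ID: id, Email: email}, nil
}

func main() {
	u, err := mapUser(tuple{int64(7), "ada@example.com"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", u)
}
```

Everything beyond this (query builders, change tracking, identity maps) is what pushes a library toward the broader, more contested sense of "ORM."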
-
Back to basics: Writing an application using Go and PostgreSQL
You might like pggen (I’m the author) which only supports Postgres and pgx. https://github.com/jschaf/pggen
pggen occupies the same design space as sqlc but the implementations are quite different. Sqlc figures out the query types using type inference in Go which is nice because you don’t need Postgres at build time. Pggen asks Postgres what the query types are which is nice because it works with any extensions and arbitrarily complex queries.
-
How We Went All In on sqlc/pgx for Postgres + Go
Any reason to use sqlc over pggen? If you use Postgres, it seems like the superior option.
- We Went All in on Sqlc/Pgx for Postgres and Go
-
What are your favorite packages to use?
Agree with your choices, except go-json which I never tried. pggen is fantastic. Love that library. The underlying driver, pgx, is also really well written.
-
I don't want to learn your garbage query language
You might like the approach I took with pggen[1] which was inspired by sqlc[2]. You write a SQL query in regular SQL and the tool generates a type-safe Go querier struct with a method for each query.
The primary benefit of pggen and sqlc is that you don't need a different query model; it's just SQL and the tools automate the mapping between database rows and Go structs.
[1]: https://github.com/jschaf/pggen
[2]: https://github.com/kyleconroy/sqlc
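Concretely, the input to both tools is a plain SQL file with a `-- name:` annotation driving codegen. The query below is a hypothetical sketch, not taken from either project's examples:

```sql
-- name: FindAuthors :many
SELECT author_id, first_name, last_name
FROM author
WHERE last_name = $1;
```

From something like this, the generator emits roughly `FindAuthors(ctx context.Context, lastName string) ([]FindAuthorsRow, error)` on the querier struct. pggen additionally accepts named parameters written as `pggen.arg('LastName')` in place of positional `$1`.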
-
What is the best way to use PostgreSQL with Go?
I created pggen a few weeks ago to support my preferred method of database interaction: I write real SQL queries and use generated, type-safe Go interfaces to call them. https://github.com/jschaf/pggen
litestream
-
Ask HN: SQLite in Production?
I have not, but I keep meaning to collate everything I've learned into a set of useful defaults just to remind myself what settings I should be enabling and why.
Regarding Litestream, I learned pretty much all I know from their documentation: https://litestream.io/
-
How (and why) to run SQLite in production
This presentation is focused on the use-case of vertically scaling a single server and driving everything through that app server, which is running SQLite embedded within your application process.
This is the sweet-spot for SQLite applications, but there have been explorations and advances to running SQLite across a network of app servers. LiteFS (https://fly.io/docs/litefs/), the sibling to Litestream for backups (https://litestream.io), is aimed at precisely this use-case. Similarly, Turso (https://turso.tech) is a new-ish managed database company for running SQLite in a more traditional client-server distribution.
-
SQLite3 Replication: A Wizard's Guide🧙🏽
This post intends to help you set up replication for SQLite using Litestream.
-
Ask HN: "Time travel" into a SQLite database using the WAL files?
I've been messing around with litestream. It is so cool. And, I either found a bug in the -timestamp switch or don't understand it correctly.
What I want to do is time travel into my sqlite database. I'm trying to do some forensics on why my web service returned the wrong data during a production event. Unfortunately, after the event, someone deleted records from the database and I'm unsure what the data looked like and am having trouble recreating the production issue.
Litestream has this great switch: -timestamp. If you use it (AFAICT) you can time travel into your database and go back to the database state at that moment. However, it does not seem to work as I expect it to:
https://github.com/benbjohnson/litestream/issues/564
I have the entirety of the sqlite database from the production event as well. Is there a way I could cycle through the WAL files and restore the database to the point in time before the records I need were deleted?
Will someone take sqlite and compile it into the browser using WASM so I can drag a sqlite database and WAL files into it and then using a timeline slider see all the states of the database over time? :)
-
Ask HN: Are you using SQLite and Litestream in production?
We're using SQLite in production very heavily with millions of databases and fairly high operations throughput.
But we did run into some scariness around trying to use Litestream that put me off it for the time being. Litestream is really cool but it is also very much a cool hack and the risk of database corruption issues feels very real.
The scariness I ran into was related to this issue https://github.com/benbjohnson/litestream/issues/510
-
Pocketbase: Open-source back end in 1 file
Litestream is a library that allows you to easily create backups. You can probably just do analytic queries on the backup data and reduce load on your server.
https://litestream.io/
- Litestream – Disaster recovery and continuous replication for SQLite
- Litestream: Replicated SQLite with no main and little cost
-
Why you should probably be using SQLite
One possible strategy is to have one directory/file per customer which is one SQLite file. But then as the user logs in, you have to look up first what database they should be connected to.
OR somehow derive it from the user ID/username. Keeping all the customer databases in a single directory/disk and then constantly "lite streaming" to S3.
Because each user is isolated, they'll be writing to their own database. But migrations would be a pain. They will have to be rolled out to each database separately.
One upside is, you can give users the ability to take their data with them, any time. It is just a single file.
[0]. https://litestream.io/
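The "derive it from the user ID" idea can be a one-line path function, so login needs no lookup table. A sketch, assuming one data directory holding every tenant's SQLite file (names and layout hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
	"strings"
)

// unsafe matches anything we refuse to put in a filename.
var unsafe = regexp.MustCompile(`[^a-z0-9_-]`)

// tenantDBPath derives a per-customer SQLite path directly from the
// user ID, so no directory-lookup step is needed at login. The whole
// dataDir can then be continuously replicated to S3 with Litestream.
func tenantDBPath(dataDir, userID string) string {
	id := unsafe.ReplaceAllString(strings.ToLower(userID), "_")
	return filepath.Join(dataDir, id+".db")
}

func main() {
	fmt.Println(tenantDBPath("/var/lib/app", "Alice@Example.com"))
	// → /var/lib/app/alice_example_com.db
}
```

Sanitizing the ID before building the path matters here: without it, a hostile username could escape the data directory.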
-
Monitor your Websites and Apps using Uptime Kuma
Uptime Kuma uses a local SQLite database to store account data, configuration for services to monitor, notification settings, and more. To make sure that our data is available across redeploys, we will bundle Uptime Kuma with Litestream, a project that implements streaming replication for SQLite databases to a remote object storage provider. Effectively, this allows us to treat the local SQLite database as if it were securely stored in a remote database.
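A minimal Litestream config for a setup like this might look as follows; the path and bucket name are placeholders, while `dbs` and `replicas` are Litestream's documented config keys:

```yaml
# litestream.yml: continuously replicate the local SQLite DB to object storage
dbs:
  - path: /app/data/kuma.db
    replicas:
      - url: s3://my-bucket/kuma
```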
What are some alternatives?
sqlc - Generate type-safe code from SQL
rqlite - The lightweight, distributed relational database built on SQLite.
SQLBoiler - Generate a Go ORM tailored to your database schema.
pocketbase - Open Source realtime backend in 1 file
sqlpp11 - A type safe SQL template library for C++
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
pggen - A database first code generator focused on postgres
k8s-mediaserver-operator - Repository for k8s Mediaserver Operator project
SqlKata Query Builder - SQL query builder, written in C#, helps you build complex queries easily; supports SqlServer, MySql, PostgreSql, Oracle, Sqlite and Firebird
sqlcipher - SQLCipher is a standalone fork of SQLite that adds 256 bit AES encryption of database files and other security features.
honeysql - Turn Clojure data structures into SQL
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines