| | drydock | pggen |
|---|---|---|
| Mentions | 3 | 11 |
| Stars | 6 | 269 |
| Growth | - | - |
| Activity | 0.0 | 6.6 |
| Latest commit | almost 2 years ago | 3 months ago |
| Language | Go | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
drydock
-
SQLite in Go, With and Without Cgo
I have been using SQLite in Go projects for a few years now. During early stages of development I always start with SQLite as the main database, then when the project matures, I usually add support for PostgreSQL.
(I usually make a Store interface which is application-specific and doesn't even assume there is an SQL database underneath. Then I make "driver" packages for each storage system - be it PostgreSQL, SQLite, flat files, time series, etc. I have only one set of unit tests, which is then run against all drivers. And when I have a caching layer, I also run all the unit tests both with and without caching. The cache is usually just an adapter that wraps a Store type. I maintain a separate schema and driver for each storage system because I have found that this is actually faster and easier than trying to write generic SQL that works across databases.)
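A minimal sketch of that setup in Go, assuming a hypothetical `Device` model and an in-memory driver standing in for the SQLite and PostgreSQL ones:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// Device is a stand-in application type; the real model is application-specific.
type Device struct {
	ID   int64
	Name string
}

// Store is the application-facing interface. Nothing in it assumes SQL.
type Store interface {
	AddDevice(ctx context.Context, d Device) (int64, error)
	GetDevice(ctx context.Context, id int64) (Device, error)
}

// memStore is one "driver": a trivial in-memory implementation. A SQLite or
// PostgreSQL driver package would satisfy the same interface.
type memStore struct {
	mu     sync.Mutex
	nextID int64
	items  map[int64]Device
}

func newMemStore() *memStore {
	return &memStore{nextID: 1, items: map[int64]Device{}}
}

func (m *memStore) AddDevice(_ context.Context, d Device) (int64, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	d.ID = m.nextID
	m.nextID++
	m.items[d.ID] = d
	return d.ID, nil
}

func (m *memStore) GetDevice(_ context.Context, id int64) (Device, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	d, ok := m.items[id]
	if !ok {
		return Device{}, errors.New("device not found")
	}
	return d, nil
}

// exercise is the single test suite, run unchanged against every driver.
func exercise(s Store) error {
	ctx := context.Background()
	id, err := s.AddDevice(ctx, Device{Name: "sensor-1"})
	if err != nil {
		return err
	}
	d, err := s.GetDevice(ctx, id)
	if err != nil {
		return err
	}
	if d.Name != "sensor-1" {
		return fmt.Errorf("got %q, want %q", d.Name, "sensor-1")
	}
	return nil
}

func main() {
	// In real tests this slice would also include the SQLite and
	// PostgreSQL drivers, and a cache-wrapped copy of each.
	for _, s := range []Store{newMemStore()} {
		if err := exercise(s); err != nil {
			panic(err)
		}
	}
	fmt.Println("all drivers pass")
}
```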
However, I always keep the SQLite support, and it is usually the default when you start the application without explicitly specifying a database. This makes it easy for other developers to do ad-hoc experiments or even create integration tests without having to fire up a database server, which, even when you can do it quickly, still takes time and effort. In production you usually want to point to a PostgreSQL (or other) database. Usually, but not always.
I also use it extensively in unit tests, often creating and destroying in-memory databases hundreds of times during just a couple of seconds of tests. I run all my tests on every build while developing, so speed matters a lot. When I want to test against PostgreSQL as well, I set a build tag that says so. I always want to run all the database tests - I just don't always need to run them against PostgreSQL.
(Actually, I made a quick hack called Drydock which takes care of creating a PostgreSQL instance and creates one database per test. This is experimental, but I've gotten a lot of use out of it: https://github.com/borud/drydock)
The reason I do this is that it results in much quicker turnaround during the initial phase when the data model may go through several complete rewrites. The lack of friction is significant.
SQLite has actually surprised me. I use it in a project where I routinely have tens of millions of rows in the biggest table. And it still performs well enough at well north of 100M rows. I wouldn't recommend it in production, but for a surprising number of systems you could if you wanted to.
The transpiled SQLite is very interesting to me for two reasons. First, it makes cross compiling a lot less complex: I make extensive use of Go and SQLite on embedded ARM platforms, where you otherwise have to choose between compiling on the target platform and messing around with cross-compiled C libraries. Second, it eliminates the need for two-stage Docker builds, which cuts building Docker images from 50+ seconds down to perhaps 4-5 seconds.
The transpiled version is slower by quite a lot. I haven't done a systematic benchmark, but I noticed that a server that stores 30-40 datapoints per second went from 0.5% average CPU load to about 2% average CPU load. I'm not terribly worried about it, but it does mean that when I increase the influx of data I'm most likely going to hit a wall sooner.
I'll be using the transpiled SQLite a lot more in the coming year and I'll be on the Gophers Slack so if anyone is interested in sharing experiences, discussing SQLite in Go, please don't be shy.
-
Exiting the Vietnam of Programming: Our Journey in Dropping the ORM (In Golang)
This isn't new. A lot of applications and libraries do this. And I think it is a good way to design things.
Usually the database I use to develop a SQL schema is Sqlite3, since it allows for really nice testing. Then I add PostgreSQL support (which requires more involved testing setup, but I have a library that makes this somewhat easier: https://github.com/borud/drydock). (SQLite being in C is a bit of a problem since it means I can't get a purely statically linked binary on all platforms - at least I haven't found a way to do that except on Linux. So if anyone has some opinions on alternatives in pure Go, I'm all ears)
In the Java days, with JDBC, every single method implementing some operation meant a lot of boilerplate; JDBC wasn't a very good API. In Go that is much less of a problem, in part because you have struct tags and libraries like sqlx. To that I add some helper functions to deal with result/error combos. It turns out the majority of my interactions with SQL databases can be carried out in 1-3 lines of code, with a surprising number of cases being one-liners. (The performance hit from using sqlx is in most cases so small it doesn't matter. If it matters to you: use sqlx while modeling and evolving the persistence layer, then optimize it out if you must. I think I've done that just once in about 100 kLOC written over the last few years.)
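A sketch of the kind of result/error helper meant here. The helper and the fake lookup are hypothetical; in real code the inner call would be a sqlx `Get` and the sentinel would be `sql.ErrNoRows`:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoRows stands in for database/sql's sql.ErrNoRows so this sketch runs
// without a database; the helper itself is the point.
var errNoRows = errors.New("no rows")

// one collapses the (value, error) combo from a sqlx-style Get call:
// "no rows" becomes found=false instead of an error the caller must inspect.
func one[T any](v T, err error) (T, bool, error) {
	if err == nil {
		return v, true, nil
	}
	var zero T
	if errors.Is(err, errNoRows) {
		return zero, false, nil
	}
	return zero, false, err
}

// fakeGet pretends to be a one-line sqlx lookup.
func fakeGet(id int) (string, error) {
	if id == 1 {
		return "alice", nil
	}
	return "", errNoRows
}

func main() {
	// Call sites become one-liners:
	name, ok, err := one(fakeGet(1))
	fmt.Println(name, ok, err)
	name, ok, err = one(fakeGet(2))
	fmt.Println(name, ok, err)
}
```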
And best of all: I get to deal with the database as a database. I write SQL DDL statements to define the schema, and SQL to perform the transactions. I don't have to pretend it is an object model, so I can make full use of SQL. (Well, actually, I try to make do as far as possible with trivial SQL, but that's a whole different discussion.) The interface type takes care of exposing the persistence in a way that fits the application.
(Another thing I've started experimenting with is returning channels, or objects containing channels, instead of slices of things. But there is still some experimenting to be done before I find a pleasing design.)
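One possible shape for the channel-returning variant - a sketch, not a settled design; a real implementation would iterate over `sql.Rows` in the goroutine:

```go
package main

import "fmt"

// Result pairs a row with an error so the consumer sees failures in-band.
type Result struct {
	Row string
	Err error
}

// streamRows returns a channel instead of a slice, so the caller can start
// processing before the full result set is materialized. Here the "rows"
// are just strings; a real driver would scan from sql.Rows.
func streamRows(rows []string) <-chan Result {
	ch := make(chan Result)
	go func() {
		defer close(ch)
		for _, r := range rows {
			ch <- Result{Row: r}
		}
	}()
	return ch
}

func main() {
	for res := range streamRows([]string{"a", "b", "c"}) {
		if res.Err != nil {
			fmt.Println("error:", res.Err)
			return
		}
		fmt.Println(res.Row)
	}
}
```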
-
Show HN: Idea for unit testing with PostgreSQL in Go
pggen
-
Ask HN: ORM or Native SQL?
Cornucopia is neat. I wrote a similar library in Go [1] so I'm very interested in comparing design decisions.
The pros of the generated code per query approach:
- App code is coupled to query outputs and inputs (an API of sorts), not database tables. Therefore, you can refactor your DB without changing app code.
- Real SQL with the full breadth of DB features.
- Real type-checking with what the DB supports.
The cons:
- Type mapping is surprisingly hard to get right, especially with composite types and arrays and custom type converters. For example, a query might return multiple jsonb columns but the app code wants to parse them into different structs.
- Dynamic queries don't work with prepared statements. Prepared statements only support values, not identifiers or scalar SQL sub-queries, so the codegen layer needs a mechanism to template SQL. I haven't built this out yet but would like to.
[1]: https://github.com/jschaf/pggen
-
What are the things with Go that have made you wish you were back in Spring/.NET/Django etc?
pggen is another fantastic library in this genre, one which specifically targets Postgres. It is driven by pgx. Cannot recommend it enough.
-
Exiting the Vietnam of Programming: Our Journey in Dropping the ORM (In Golang)
> Do you write out 120 "INSERT" statements, 120 "UPDATE" statements, 120 "DELETE" statements as raw strings
Yes. For example: https://github.com/jschaf/pggen/blob/main/example/erp/order/....
> that is also using an ORM
ORM as a term covers a wide swathe of usage. In the smallest definition, an ORM converts DB tuples to Go structs. In common usage, most folks use ORM to mean a generic query builder plus the type conversion from tuples to structs. For other usages, I prefer the Patterns of Enterprise Application Architecture terms [1] like data-mapper, active record, and table-data gateway.
[1]: https://martinfowler.com/eaaCatalog/
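A sketch of what one such hand-written statement looks like in Go; the table and column names here are invented, not taken from the linked example:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// insertOrderSQL is one of the "120 INSERT statements": a single raw SQL
// string with an explicit, typed wrapper function around it.
const insertOrderSQL = `INSERT INTO orders (customer_id, total_cents)
VALUES ($1, $2) RETURNING id`

// InsertOrder is the typed wrapper. Each operation gets its own small,
// boring function like this one.
func InsertOrder(ctx context.Context, db *sql.DB, customerID, totalCents int64) (int64, error) {
	var id int64
	err := db.QueryRowContext(ctx, insertOrderSQL, customerID, totalCents).Scan(&id)
	return id, err
}

func main() {
	// No database here; this sketch just shows the shape of one such function.
	fmt.Println("query starts with:", insertOrderSQL[:6])
}
```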
-
Back to basics: Writing an application using Go and PostgreSQL
You might like pggen (I’m the author) which only supports Postgres and pgx. https://github.com/jschaf/pggen
pggen occupies the same design space as sqlc but the implementations are quite different. Sqlc figures out the query types using type inference in Go which is nice because you don’t need Postgres at build time. Pggen asks Postgres what the query types are which is nice because it works with any extensions and arbitrarily complex queries.
-
How We Went All In on sqlc/pgx for Postgres + Go
Any reason to use sqlc over pggen? If you use Postgres, pggen seems like the superior option.
-
We Went All in on Sqlc/Pgx for Postgres and Go
-
What are your favorite packages to use?
Agree with your choices, except go-json, which I never tried. pggen is fantastic. Love that library. The underlying driver, pgx, is also really well written.
-
I don't want to learn your garbage query language
You might like the approach I took with pggen[1] which was inspired by sqlc[2]. You write a SQL query in regular SQL and the tool generates a type-safe Go querier struct with a method for each query.
The primary benefit of pggen and sqlc is that you don't need a different query model; it's just SQL and the tools automate the mapping between database rows and Go structs.
[1]: https://github.com/jschaf/pggen
[2]: https://github.com/kyleconroy/sqlc
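For readers who haven't seen the format: a pggen query file is plain SQL plus a name annotation, roughly like the following (an illustrative sketch; see the pggen README for the exact annotation syntax):

```sql
-- name: FindAuthors :many
SELECT author_id, first_name, last_name
FROM author
WHERE first_name = pggen.arg('FirstName');
```

From the annotation (`:many`) and Postgres's answer about parameter and column types, the tool can emit a typed querier method shaped roughly like `FindAuthors(ctx context.Context, firstName string) ([]FindAuthorsRow, error)`.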
-
What is the best way to use PostgreSQL with Go?
I created pggen a few weeks ago to support my preferred method of database interaction: I write real SQL queries and I use generated, type-safe Go interfaces to the queries. https://github.com/jschaf/pggen
What are some alternatives?
tcl
sqlc - Generate type-safe code from SQL
sqinn - SQLite over stdin/stdout
SQLBoiler - Generate a Go ORM tailored to your database schema.
xgo - Go CGO cross compiler
sqlpp11 - A type safe SQL template library for C++
sqlite - work in progress
pggen - A database first code generator focused on postgres
framework - PHP Framework providing ActiveRecord models and out of the box CRUD controllers with versioning and ORM support
SqlKata Query Builder - SQL query builder, written in c#, helps you build complex queries easily, supports SqlServer, MySql, PostgreSql, Oracle, Sqlite and Firebird
zeidon-joe - Zeidon Java Object Engine and related projects.
honeysql - Turn Clojure data structures into SQL