-
I was really really excited when I saw the title because I've been having a lot of difficulties with other Go SQL libraries, but the caveats section gives me pause.
Needing to use arrays for the IN use case (see https://github.com/kyleconroy/sqlc/issues/216) and for bulk inserts feels like a large divergence from what "idiomatic SQL" looks like. It means that you have to adjust how you write your queries, and that can be intimidating for new developers. (A sketch of the array workaround is below.)
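For concreteness, here is a minimal sketch of the array workaround with plain pgx (the authors table is made up; with sqlc you'd put the same ANY($1) query in your .sql file and it generates a function taking a slice):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Instead of IN ($1, $2, $3) with a variable-length placeholder
	// list, bind a single array parameter; pgx maps []int64 to bigint[].
	rows, err := conn.Query(ctx,
		`SELECT id, name FROM authors WHERE id = ANY($1::bigint[])`,
		[]int64{1, 2, 3})
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			panic(err)
		}
		fmt.Println(id, name)
	}
}
```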
The conditional insert case also just doesn't look particularly elegant and the SQL query is pretty large.
sqlc also just doesn't look like it could help with the very dynamic queries I need to generate; I work on a team that owns a little domain-specific search engine. The conditional approach could in theory work here, but it's not good for the query planner: https://use-the-index-luke.com/sql/where-clause/obfuscation/...
-
xo
Command line tool to generate idiomatic Go code for SQL databases supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server
I've used https://github.com/xo/xo, extended it with some custom functions for templating, extended the templates themselves, and can now generate CRUD for anything in the database: functions for common select queries based on the indices that exist in the database, field filtering and scanning, updates for subsets of fields (including some atomic operations), etc. The sky is the limit, honestly. It has let me start with something approximating a statically generated ORM and extend it with any features I want as time goes on. I also write .extra.go files alongside the generated .xo.go files to extend the generated structs with custom logic and methods that convert data into response formats.
I like the approach of starting with the database schema and generating code to reflect it. I define my schema in SQL files and handle database migrations using https://github.com/golang-migrate/migrate.
If you take this approach, you can mostly avoid exposing details about the SQL driver being used, and since the driver is mostly used by a few templates, swapping drivers doesn't take much effort.
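A sketch of the .extra.go pattern (all names hypothetical; assume the generated user.xo.go defines a User struct with Name and Email fields):

```go
// user.extra.go -- lives beside the generated user.xo.go, which is
// assumed to define: type User struct { Name, Email string; ... }
package models

// UserResponse is a hand-written API shape, kept out of the
// generated code.
type UserResponse struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// ToResponse converts the generated row struct into the response
// format; regenerating the .xo.go files never touches this file.
func (u *User) ToResponse() UserResponse {
	return UserResponse{Name: u.Name, Email: u.Email}
}
```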
-
pggen
Generate type-safe Go for any Postgres query. If Postgres can run the query, pggen can generate code for it.
I agree whole-heartedly that writing SQL feels right. Broadly speaking, you can take the following approaches to mapping database queries to Go code:
- Write SQL queries, parse the SQL, and generate Go from the queries (sqlc, pggen).
- Write SQL schema files, parse the SQL schema, and generate active records based on the tables (gorm).
- Write Go structs, generate the SQL schema from the structs, and use a custom query DSL (proteus).
- Write a custom query language (YAML or other), and generate the SQL schema, queries, and a Go query interface (xo).
- Skip generated code and use a non-type-safe query builder (squirrel, goqu).
I prefer writing SQL queries so that app logic doesn't depend on the database table structure.
I started off with sqlc but ran into limitations with more complex queries. It's quite difficult to infer what a SQL query will output even with a proper parse tree. sqlc also didn't work with generated code.
I wrote pggen with the idea that you can just execute the query and have Postgres tell you what the output types and names will be. Here's the original design doc [1] that outlines the motivations. By comparison, sqlc starts from the parse tree, and has the complex task of computing the control flow graph for nullability and type outputs.
[1]: https://docs.google.com/document/d/1NvVKD6cyXvJLWUfqFYad76CW...
Disclaimer: author of pggen (https://github.com/jschaf/pggen), inspired by sqlc
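The describe half of that trick is visible with plain pgx: a Prepare round-trip makes Postgres itself report the output column names and type OIDs. A minimal sketch (the authors table is made up, and pggen's real pipeline does more, e.g. nullability analysis):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Prepare sends Parse+Describe; Postgres answers with the
	// statement's parameter OIDs and output column descriptions.
	sd, err := conn.Prepare(ctx, "probe",
		`SELECT id, name FROM authors WHERE id = $1`)
	if err != nil {
		panic(err)
	}
	for _, f := range sd.Fields {
		fmt.Printf("column %s has type OID %d\n", string(f.Name), f.DataTypeOID)
	}
}
```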
-
If you want a code generator that supports that kind of thing, https://github.com/opendoor/pggen can automatically infer these relationships from foreign keys and emit slices of pointers to connect the records together in memory. It can even figure out 1-1 relationships if there is a UNIQUE index on the foreign key. There is a little mini-DSL for specifying exactly how much of the transitive closure of a given record you want filled in for you.
-
Good point. I had assumed Rows.Scan() would have just used type switches for efficiency -- it looks like it does for common cases (https://github.com/golang/go/blob/d62866ef793872779c9011161e...) but then falls back to reflect. I wonder why it doesn't just do all of that with type switches? Maybe there are just too many cases and it ends up slower than reflect for the rest of the cases.
Scanner.Scan() is actually just called via a type assertion, though I guess implementations of Scan() might use reflection.
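For reference, a typical Scanner implementation is just a type switch over the handful of source types database/sql can hand it (int64, float64, bool, []byte, string, time.Time, or nil); a minimal sketch:

```go
package main

import (
	"database/sql"
	"fmt"
	"time"
)

// NullableTime mimics sql.NullTime: a type switch over the small
// set of driver source types.
type NullableTime struct {
	Time  time.Time
	Valid bool
}

func (n *NullableTime) Scan(src interface{}) error {
	switch v := src.(type) {
	case nil:
		n.Time, n.Valid = time.Time{}, false
		return nil
	case time.Time:
		n.Time, n.Valid = v, true
		return nil
	default:
		return fmt.Errorf("cannot scan %T into NullableTime", src)
	}
}

// Compile-time check that the interface is satisfied.
var _ sql.Scanner = (*NullableTime)(nil)

func main() {
	var nt NullableTime
	if err := nt.Scan(time.Now()); err != nil {
		panic(err)
	}
	fmt.Println(nt.Valid)
}
```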
-
I was expecting the article to contain a note about SQLBoiler (https://github.com/volatiletech/sqlboiler) and why they didn't use it, but there isn't one. So I was expecting SQLBoiler to be heavily mentioned in the comments, but that's not the case either.
If you want to see a (slightly heated) debate about `sqlc` versus SQLBoiler with their respective creators: https://www.reddit.com/r/golang/comments/e9bvrt/sqlc_compile...
Note that SQLBoiler does not seem to be compatible with `pgx`.
-
It's not really pure Go; it's transpiled from C using https://gitlab.com/cznic/ccgo
Just about all the code looks like this:
// Call this routine to record the fact that an OOM (out-of-memory) error
-
If you are looking for a way to map SQL queries to type safe Go functions, take a look at my library Proteus: https://github.com/jonbodner/proteus
Proteus generates functions at runtime, avoiding code generation. Performance is identical to writing SQL mapping code yourself. I spoke about its implementation at GopherCon 2017: https://www.youtube.com/watch?v=hz6d7rzqJ6Q
-
As an alternative, I suggest people look at https://github.com/go-jet/jet. I had a good experience working with it, and the author is quite responsive.
It really feels like writing SQL, but you are writing type-safe Go, which I really enjoy.
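From memory of the README, usage looks roughly like this (the generated table/model import paths and the Author table are assumptions):

```go
package main

import (
	"database/sql"
	"fmt"

	. "github.com/go-jet/jet/v2/postgres"
	_ "github.com/lib/pq"

	// go-jet generates these packages from your schema; the paths
	// below are made up for the sketch.
	"myapp/gen/mydb/public/model"
	"myapp/gen/mydb/public/table"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Reads like SQL, but columns and the destination struct are
	// checked at compile time.
	stmt := SELECT(
		table.Author.AllColumns,
	).FROM(
		table.Author,
	).WHERE(
		table.Author.ID.EQ(Int(1)),
	)

	var dest []model.Author
	if err := stmt.Query(db, &dest); err != nil {
		panic(err)
	}
	fmt.Println(dest)
}
```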
-
> windows amd64
And if you look at their source tree https://gitlab.com/cznic/sqlite/-/tree/master/lib you can see they have sqlite_darwin_amd64.go
-
-
I like SQL queries as strings, but I also like my IDE to syntax-check them. Since there are already so many kinds of projects in this thread, I'll happily introduce fileconst, which provides the best of both worlds: https://github.com/PennState/fileconst.
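If I understand it correctly, fileconst generates Go constants from file contents; for comparison, since Go 1.16 the standard library's go:embed gives a similar split (file name made up):

```go
package queries

// The SQL lives in its own .sql file, where the IDE can syntax-check
// it, and is compiled into the binary as a plain string.

import _ "embed"

//go:embed find_user.sql
var FindUserSQL string
```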
-
I attempted to make something similar to this, except in the opposite direction, at a previous job. It was called Pronto: https://github.com/CaperAi/pronto/
It allowed us to store and query protos in MongoDB. It wasn't perfect (lots of issues), but the idea was that rather than specifying custom models for all of our DB logic in our Java code, we could write a proto, and code could automatically import that proto and read/write it to the database. This made building tooling to debug issues very easy and made it very simple to hide a DB behind a gRPC API.
The tool automated the boring stuff. I wish I could have extended this to have you define a service in a .proto and "compile" that into an ORM DAO-like thing automatically so you never need to worry about manually wiring that stuff ever again.
-
-
This looks better than typical ORMs, but still not giving me what I want.
I want query objects to be composable, and mutable. That lets you do things like this: http://btubbs.com/postgres-search-with-facets-and-location-a.... sqlc would force you to write a separate query for each possible permutation of search features that the user opts to use.
I like the "query builder" pattern you get from Goqu. https://github.com/doug-martin/goqu
-
> - Write a custom query language (YAML or other), and generate the SQL schema, queries, and a Go query interface (xo).
I've also tried to unify this approach with gRPC/Protobuf messages and CRUD operations: https://github.com/sashabaranov/pike/
-
- Generate code from a running database and use a type-safe query builder (https://github.com/bokwoon95/go-structured-query).
-
goyesql
Parse SQL files with multiple named queries and automatically prepare and scan them into structs.
sqlc looks very interesting and compelling. A similar library I like but haven't had the chance to really use is goyesql: https://github.com/knadh/goyesql
It also allows just writing SQL in a file; it reminds me a bit of JDBI in Java.
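The gist (from memory of the README, so treat the exact API as an approximation): one .sql file holds several queries, each tagged with a -- name: comment, and the parser returns them keyed by tag:

```go
package main

import (
	"fmt"

	"github.com/knadh/goyesql/v2"
)

// queries.sql would contain named blocks like:
//
//	-- name: list-users
//	SELECT id, name FROM users;
//
//	-- name: get-user
//	SELECT id, name FROM users WHERE id = $1;
func main() {
	qs, err := goyesql.ParseFile("queries.sql")
	if err != nil {
		panic(err)
	}
	fmt.Println(qs["list-users"].Query)
}
```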
-
upper.io/db
Data Access Layer (DAL) for PostgreSQL, CockroachDB, MySQL, SQLite and MongoDB with ORM-like features.
This is the reason I chose upper/db over pgx/sqlc for my current CockroachDB side project.
While upper/db is not as type-safe, with proper testing infrastructure it's the closest thing to Django, thanks to its simplicity, composability, and query-building support.
I'm also excited to see how upper/db grows after generics land in Go later this year.
https://github.com/upper/db
https://upper.io/
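For flavor, the v4 collection-style API looks roughly like this (connection settings and the Book mapping are assumptions):

```go
package main

import (
	"fmt"
	"log"

	"github.com/upper/db/v4"
	"github.com/upper/db/v4/adapter/cockroachdb"
)

type Book struct {
	ID     int64  `db:"id"`
	Title  string `db:"title"`
	Author string `db:"author"`
}

func main() {
	settings := cockroachdb.ConnectionURL{
		Host:     "localhost",
		Database: "booktown",
		User:     "demo",
	}
	sess, err := cockroachdb.Open(settings)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Condition maps compose instead of hand-building SQL strings.
	var books []Book
	if err := sess.Collection("books").
		Find(db.Cond{"author": "Tolkien"}).
		All(&books); err != nil {
		log.Fatal(err)
	}
	fmt.Println(books)
}
```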
-
I uploaded most of our Workspace setup here: https://github.com/jschaf/bazel-postgres-sketch. The tooling is a bunch of Go to manage the Postgres process. Basically, give it schema files and receive a running database with a tear down command.
We make temp instances of Postgres quickly by:
- avoiding Docker, especially on Mac
- keeping the data dir on tmpfs
- disabling initdb cleanup
- disabling fsync and other data-integrity flags
- using unlogged tables
- using sockets instead of TCP localhost
For a test suite, it was 12x faster to call createdb against the same Postgres cluster for each test than to create a whole new database cluster each time. The trick was to create a template database after loading the schema and use that for each createdb call.
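A minimal sketch of the template trick (names and connection string made up): load the schema once into a template database, and each per-test CREATE DATABASE is then a cheap file-level copy:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

// createTestDB stamps out a fresh database from a template that
// already has the schema loaded, skipping the schema load entirely.
func createTestDB(ctx context.Context, admin *sql.DB, name string) error {
	_, err := admin.ExecContext(ctx,
		fmt.Sprintf(`CREATE DATABASE %q TEMPLATE app_template`, name))
	return err
}

func main() {
	admin, err := sql.Open("postgres",
		"postgres://localhost:5432/postgres?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer admin.Close()

	if err := createTestDB(context.Background(), admin, "test_1"); err != nil {
		panic(err)
	}
}
```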
-
Cool, thanks for the link.
For what it's worth, we use rules_nixpkgs to source Postgres (for Linux and Darwin) as well as things such as C and Python toolchains, and it's been working really well. It does require that the machine have Nix installed, but that opens up access to Nix's wide array of prebuilt packages.
https://github.com/tweag/rules_nixpkgs