| | drydock | go-sqlite3 |
|---|---|---|
| Mentions | 3 | 40 |
| Stars | 6 | 7,471 |
| Growth | - | - |
| Activity | 0.0 | 6.2 |
| Latest commit | almost 2 years ago | 4 days ago |
| Language | Go | C |
| License | Apache License 2.0 | MIT License |
Stars is the number of stars a project has on GitHub. Growth is the month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
drydock
-
SQLite in Go, with and Without Cgo
I have been using SQLite in Go projects for a few years now. During the early stages of development I always start with SQLite as the main database; then, when the project matures, I usually add support for PostgreSQL.
(I usually make a Store interface that is application-specific and doesn't even assume there is an SQL database underneath. Then I make a "driver" package for each storage system - be it PostgreSQL, SQLite, flat files, time series, etc. I have only one set of unit tests, which is then run against all drivers. And when I have a caching layer, I also run all the unit tests both with and without caching. The cache is usually just an adapter that wraps a Store type. I maintain separate schemas and drivers for each backend because I have found that this is actually faster and easier than trying to write generic SQL drivers.)
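As a rough illustration of that pattern, here is a minimal sketch (all names are mine, not from the post): an application-specific Store interface plus a single shared test suite that every driver package runs.

```go
package store

import (
	"context"
	"testing"
)

// Device is a stand-in application entity.
type Device struct {
	ID   int64
	Name string
}

// Store is application-specific and does not assume an SQL database underneath.
type Store interface {
	AddDevice(ctx context.Context, d Device) (int64, error)
	GetDevice(ctx context.Context, id int64) (Device, error)
	Close() error
}

// runStoreTests is the single shared suite that every driver must pass.
func runStoreTests(t *testing.T, open func(t *testing.T) Store) {
	t.Run("AddGet", func(t *testing.T) {
		s := open(t)
		defer s.Close()

		id, err := s.AddDevice(context.Background(), Device{Name: "sensor-1"})
		if err != nil {
			t.Fatal(err)
		}
		d, err := s.GetDevice(context.Background(), id)
		if err != nil || d.Name != "sensor-1" {
			t.Fatalf("got %+v, err %v", d, err)
		}
	})
}

// Each driver package then needs only a one-line entry point, e.g.:
//
//	func TestSQLiteStore(t *testing.T)   { runStoreTests(t, openSQLiteStore) }
//	func TestPostgresStore(t *testing.T) { runStoreTests(t, openPostgresStore) }
```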
However, I always keep the SQLite support, and it is usually the default when you start the application without explicitly specifying a database. This makes it easy for other developers to do ad-hoc experiments or even write integration tests without having to fire up a database server, which takes time and effort even when you can do it quickly. In production you usually want to point to a PostgreSQL (or other) database. Usually, but not always.
I also use it extensively in unit tests, often creating and destroying in-memory databases hundreds of times within a couple of seconds of testing. I run all my tests on every build while developing, so speed matters a lot. When testing against PostgreSQL I set a build tag specifying that I want to run the tests against PostgreSQL as well. I always want to run all the database tests; I don't always need to run them against PostgreSQL.
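A minimal sketch of that kind of test helper, assuming the mattn/go-sqlite3 driver (the helper names are illustrative, not from the post):

```go
package store

import (
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3"
)

// newTestDB gives each test its own throwaway in-memory database.
func newTestDB(t *testing.T) *sql.DB {
	t.Helper()
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatal(err)
	}
	// Limit to one connection: with database/sql, a second pooled
	// connection to ":memory:" would see a different, empty database.
	db.SetMaxOpenConns(1)
	if _, err := db.Exec(`CREATE TABLE devices (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { db.Close() })
	return db
}

// The PostgreSQL variant can live in a file guarded by a build tag, e.g.
//
//	//go:build postgres
//
// so that `go test ./...` stays fast and `go test -tags postgres ./...`
// also exercises PostgreSQL.
```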
(Actually, I made a quick hack called Drydock which takes care of creating a PostgreSQL instance and creates one database per test. This is experimental, but I've gotten a lot of use out of it: https://github.com/borud/drydock)
The reason I do this is that it results in much quicker turnaround during the initial phase when the data model may go through several complete rewrites. The lack of friction is significant.
SQLite has actually surprised me. I use it in a project where I routinely have tens of millions of rows in the biggest table, and it still performs well enough at well north of 100M rows. I wouldn't recommend it in production, but for a surprising number of systems you could run it there if you wanted to.
The transpiled SQLite is very interesting to me for two reasons. First, it makes cross-compiling a lot less complex: I make extensive use of Go and SQLite on embedded ARM platforms, where you otherwise have to choose between compiling on the target platform and messing around with C cross-compilation toolchains. Second, it eliminates the need for two-stage Docker builds, which cuts building Docker images down from 50+ seconds to perhaps 4-5 seconds.
The transpiled version is slower by quite a lot. I haven't done a systematic benchmark, but I noticed that a server that stores 30-40 datapoints per second went from 0.5% average CPU load to about 2% average CPU load. I'm not terribly worried about it, but it does mean that when I increase the influx of data I'm most likely going to hit a wall sooner.
I'll be using the transpiled SQLite a lot more in the coming year, and I'll be on the Gophers Slack, so if anyone is interested in sharing experiences or discussing SQLite in Go, please don't be shy.
-
Exiting the Vietnam of Programming: Our Journey in Dropping the ORM (In Golang)
This isn't new. A lot of applications and libraries do this. And I think it is a good way to design things.
Usually the database I use to develop a SQL schema is SQLite, since it allows for really nice testing. Then I add PostgreSQL support, which requires a more involved testing setup, but I have a library that makes this somewhat easier: https://github.com/borud/drydock. (SQLite being in C is a bit of a problem, since it means I can't get a purely statically linked binary on all platforms - at least I haven't found a way to do that except on Linux. So if anyone has opinions on pure-Go alternatives, I'm all ears.)
In the Java days, with JDBC, every single method implementing some operation required a lot of boilerplate; JDBC wasn't a very good API. In Go that is much less of a problem, in part because you have struct tags and libraries like sqlx. To that I add some helper functions to deal with result/error combos. It turns out the majority of my interactions with SQL databases can be carried out in 1-3 lines of code, with a surprising number of cases being just one-liners. (The performance hit from using sqlx is in most cases so minimal it doesn't matter. If it matters to you: use sqlx while modeling and evolving the persistence layer, then optimize it out if you must. I think I've done that just once in about 100 kLOC written over the last few years.)
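A hedged sketch of what those one-liners look like with sqlx struct tags (the table and type names are made up for the example):

```go
package store

import (
	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-sqlite3"
)

// Device maps columns to fields via `db` struct tags.
type Device struct {
	ID   int64  `db:"id"`
	Name string `db:"name"`
}

type sqliteStore struct{ db *sqlx.DB }

func open(path string) (*sqliteStore, error) {
	db, err := sqlx.Connect("sqlite3", path)
	if err != nil {
		return nil, err
	}
	return &sqliteStore{db: db}, nil
}

// GetDevice: the whole operation is a one-liner thanks to struct tags.
func (s *sqliteStore) GetDevice(id int64) (d Device, err error) {
	err = s.db.Get(&d, `SELECT id, name FROM devices WHERE id = ?`, id)
	return d, err
}

// ListDevices is similarly short with Select.
func (s *sqliteStore) ListDevices() (ds []Device, err error) {
	err = s.db.Select(&ds, `SELECT id, name FROM devices ORDER BY id`)
	return ds, err
}
```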
And best of all: I get to deal with the database as a database. I write SQL DDL statements to define the schema, and SQL to perform the transactions. I don't have to pretend it is an object model, so I can make full use of SQL. (Well, actually, I try to make do as far as possible with trivial SQL, but that's a whole different discussion.) The interface type takes care of exposing the persistence in a way that fits the application.
(Another thing I've started experimenting with is returning channels, or objects containing channels, instead of slices of things. There is still some experimenting to be done before I find a pleasing design.)
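A sketch of one possible shape for that channel-returning design, using plain database/sql (all names are illustrative, not from the post):

```go
package store

import (
	"context"
	"database/sql"
)

// Device matches the earlier sketches.
type Device struct {
	ID   int64
	Name string
}

// StreamDevices returns results over a channel instead of building a
// slice, so the caller can start consuming rows before the query is
// fully drained. Errors arrive on a second channel.
func StreamDevices(ctx context.Context, db *sql.DB) (<-chan Device, <-chan error) {
	out := make(chan Device)
	errc := make(chan error, 1)
	go func() {
		defer close(out)
		defer close(errc)
		rows, err := db.QueryContext(ctx, `SELECT id, name FROM devices`)
		if err != nil {
			errc <- err
			return
		}
		defer rows.Close()
		for rows.Next() {
			var d Device
			if err := rows.Scan(&d.ID, &d.Name); err != nil {
				errc <- err
				return
			}
			select {
			case out <- d:
			case <-ctx.Done():
				errc <- ctx.Err()
				return
			}
		}
		errc <- rows.Err()
	}()
	return out, errc
}
```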
- Show HN: Idea for unit testing with PostgreSQL in Go
go-sqlite3
-
Show HN: Roast my SQLite encryption at-rest
SQLite encryption at-rest is a hotly requested feature of both the “default” cgo driver [1] and the transpiled alternative driver [2]. So this is a feature I wanted to bring to my own Wasm-based Go driver/bindings [3].
Open-source SQLite encryption extensions have had a troubled few years. For whatever reason, in 2020 the (undocumented) feature that made it easy to offer page-level encryption was removed [4]. Some solutions are stuck on SQLite 3.31.1, but Ulrich Telle stepped up with a VFS-based approach [5].
Still, that solution seemed harder to maintain than I'd like, as it requires understanding the structure of what's being written to disk at the VFS layer. So I looked to full-disk encryption for something with less of an impedance mismatch.
Specifically, I'm using Adiantum, a tweakable and length-preserving encryption scheme (with 4K blocks, matching the default SQLite page size), and encrypting whole files rather than individual page contents.
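To make the scheme concrete, here is a minimal sketch of whole-file, block-at-a-time encryption with a per-block tweak. The WideBlockCipher interface is a hypothetical stand-in for an Adiantum implementation (e.g. lukechampine.com/adiantum); none of these names come from the actual driver.

```go
package vfsenc

import "encoding/binary"

// WideBlockCipher is a hypothetical stand-in for a tweakable,
// length-preserving cipher such as Adiantum: a 4096-byte block maps
// to a 4096-byte block under a per-block tweak.
type WideBlockCipher interface {
	Encrypt(block, tweak []byte) []byte
	Decrypt(block, tweak []byte) []byte
}

const blockSize = 4096 // matches SQLite's default page size

// encryptFile encrypts a whole file block by block. The tweak is the
// block index, so identical plaintext blocks encrypt differently at
// different offsets. SQLite files are a whole number of pages, so
// len(data) is assumed to be a multiple of blockSize.
func encryptFile(c WideBlockCipher, data []byte) []byte {
	out := make([]byte, 0, len(data))
	tweak := make([]byte, 8)
	for off := 0; off < len(data); off += blockSize {
		binary.LittleEndian.PutUint64(tweak, uint64(off/blockSize))
		out = append(out, c.Encrypt(data[off:off+blockSize], tweak)...)
	}
	return out
}
```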
I'm not a cryptographer, so I'd really appreciate some roasting before release.
There is nothing very Go-specific about this (apart from the implementation), so if there are no obvious flaws it may make sense to port it to C/Rust/etc. and make it a loadable extension.
[1] https://github.com/mattn/go-sqlite3/pull/1109
-
Redis Re-Implemented with SQLite
For what it's worth, the two-pool approach is suggested here by a collaborator on github.com/mattn/go-sqlite3: https://github.com/mattn/go-sqlite3/issues/1179#issuecomment...
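For readers unfamiliar with the pattern, a hedged sketch of what such a two-pool setup can look like with mattn/go-sqlite3: one pool capped at a single connection for writes, another for concurrent reads. The DSN parameters and names here are illustrative, not taken from the linked issue.

```go
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// Pools holds separate read and write pools for one SQLite file.
type Pools struct {
	Read  *sql.DB
	Write *sql.DB
}

func Open(path string) (*Pools, error) {
	// WAL mode lets readers proceed while a write is in progress;
	// the busy timeout makes lock contention wait instead of failing.
	dsn := "file:" + path + "?_journal_mode=WAL&_busy_timeout=5000"

	write, err := sql.Open("sqlite3", dsn)
	if err != nil {
		return nil, err
	}
	write.SetMaxOpenConns(1) // SQLite allows only one writer at a time

	read, err := sql.Open("sqlite3", dsn)
	if err != nil {
		write.Close()
		return nil, err
	}
	return &Pools{Read: read, Write: write}, nil
}
```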
-
Replacing Complicated Hashmaps with SQLite
SQLite is great. I've also recently settled on it as a key-value store, after considering a few purpose-built key-value solutions. Turns out that it's really easy to make SQLite work as a key-value store, but very difficult to make key-value stores relational.
Just be careful with `:memory:` databases. From the mattn/go-sqlite3 FAQ[1]:
> Each connection to ":memory:" opens a brand new in-memory sql database, so if the stdlib's sql engine happens to open another connection and you've only specified ":memory:", that connection will see a brand new database. A workaround is to use "file::memory:?cache=shared" (or "file:foobar?mode=memory&cache=shared"). Every connection to this string will point to the same in-memory database.
I noticed strange behaviors with plain `:memory:` where tables would just disappear at random, and this workaround helped. Make sure to use a unique filename as the `file:` value, especially when using this in tests.
[1]: https://github.com/mattn/go-sqlite3#faq
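A minimal sketch of the workaround from the FAQ, using a unique name so parallel tests don't accidentally share state:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// A unique name per test avoids two tests sharing the same in-memory DB.
	db, err := sql.Open("sqlite3", "file:test_foobar?mode=memory&cache=shared")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
		log.Fatal(err)
	}
	// All pooled connections now see the same database, so the table
	// no longer "disappears" when database/sql opens a second connection.
	if _, err := db.Exec(`INSERT INTO kv (k, v) VALUES (?, ?)`, "greeting", "hello"); err != nil {
		log.Fatal(err)
	}
	var v string
	if err := db.QueryRow(`SELECT v FROM kv WHERE k = ?`, "greeting").Scan(&v); err != nil {
		log.Fatal(err)
	}
	fmt.Println(v)
}
```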
-
What 3rd-party libraries do you use often/all the time?
github.com/mattn/go-sqlite3
-
From Golang Beginner to Building Basic Web Server in 4 Days!
For building my web server, I chose the Gin framework as the foundation of my app. It was easy to understand and work with, and I was pleasantly surprised by how well it integrated with unit tests for the server. To handle the database, I used go-sqlite3 and migrate for SQL queries and migrations. Both libraries proved powerful and user-friendly, which made the development process smooth.
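For flavor, a minimal sketch of that combination: a Gin handler backed by SQLite through database/sql and the go-sqlite3 driver (the schema and route are made up for the example):

```go
package main

import (
	"database/sql"
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	r := gin.Default()
	// Look up a user by ID and return it as JSON.
	r.GET("/users/:id", func(c *gin.Context) {
		var name string
		err := db.QueryRow(`SELECT name FROM users WHERE id = ?`, c.Param("id")).Scan(&name)
		if err != nil {
			c.JSON(http.StatusNotFound, gin.H{"error": "not found"})
			return
		}
		c.JSON(http.StatusOK, gin.H{"name": name})
	})
	log.Fatal(r.Run(":8080"))
}
```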
-
Zig now has built-in HTTP server and client in std
https://github.com/mattn/go-sqlite3/blob/master/_example/sim...
-
Exciting SQLite Improvements Since 2020
SQLite does have an optional "user authentication" extension, though I've not personally tried it out:
https://www.sqlite.org/src/doc/trunk/ext/userauth/user-auth....
The widely used Go SQLite library by mattn says it supports it, if that's useful:
https://github.com/mattn/go-sqlite3#user-authentication
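Going by that README, usage looks roughly like the sketch below: the driver is built with the sqlite_userauth tag and credentials are passed as DSN parameters. I haven't tried it either, so treat this as an untested sketch.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// Build with: go build -tags sqlite_userauth
func main() {
	// _auth creates the admin user on the first open of a new database;
	// subsequent opens must present the same credentials.
	dsn := "file:auth.db?_auth&_auth_user=admin&_auth_pass=secret"
	db, err := sql.Open("sqlite3", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Fails if the credentials in the DSN are wrong.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t (x INTEGER)`); err != nil {
		log.Fatal(err)
	}
}
```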
-
Go port of SQLite without CGo
I have an OSS project, sq, which is a data-wrangling Swiss Army knife for structured data. Think of it as jq for databases. It supports PostgreSQL, SQL Server, MySQL and - relevantly - SQLite. It embeds SQLite via cgo and the mattn/go-sqlite3 driver.
- In-memory key value store
-
Tools besides Go for a newbie
IDE: use whatever makes you productive. I personally use VS Code.
VCS: git, as the Go community heavily uses GitHub as the home for many libraries.
Linter: use staticcheck, as it looks like the most widely used linting tool in Go and is well supported; in VS Code it will be recommended once you install the Go plugin.
Libraries/frameworks: the standard library already includes much of what you need and is decent enough for day-to-day development (e.g. `net/http`). For extras:
- Struct field validation: validator
- HTTP server libs: chi router, httprouter, fasthttp (a non-standard HTTP implementation, but fast)
- Web frameworks: echo, gin, fiber, beego, etc.
- HTTP clients: mostly covered by the stdlib (net/http), so you rarely need an extra lib, but if you do: resty
- CLI: cobra
- Config: godotenv, viper
- DB drivers: sqlx, postgres, sqlite, mysql
- NoSQL: redis, mongodb, elasticsearch
- ORM: gorm, entgo, sqlc (codegen)
- JS transpiler: gopherjs
- GUI: fyne
- gRPC: grpc
- Logging: zerolog
- Testing: testify, gomock, dockertest
- and many others you can find here
What are some alternatives?
tcl
GORM - The fantastic ORM library for Golang, aims to be developer friendly
sqinn - SQLite over stdin/stdout
sqlx - general purpose extensions to golang's database/sql
xgo - Go CGO cross compiler
pgx - PostgreSQL driver and toolkit for Go
sqlite - work in progress
go-sqlite - Low-level Go interface to SQLite 3
framework - PHP Framework providing ActiveRecord models and out of the box CRUD controllers with versioning and ORM support
go-sqlite-lite - SQLite driver for the Go programming language
zeidon-joe - Zeidon Java Object Engine and related projects.
Sqinn-Go - Golang SQLite without cgo