tailetc vs lungo
| | tailetc | lungo |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 131 | 448 |
| Growth | - | - |
| Activity | 0.0 | 5.0 |
| Last commit | almost 2 years ago | about 1 month ago |
| Language | Go | Go |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tailetc
- Sched - In-process Go Job Scheduler. With Cron Support and Prometheus Metrics
  https://github.com/tailscale/tailetc/blob/b2fa539c2383d30d03e0eea1052022af132dca9f/tailetc.go#L142
- An Unlikely Database Migration
Interesting choice of technology, but you haven't completely convinced me as to why this is better than just using SQLite or PostgreSQL with a lagging replica. (You could probably start with either one and easily migrate to the other if needed.)
In particular, you've designed a very complicated system. Operationally, you need both an etcd cluster and a tailetc cluster. Code-wise, you now have to maintain your own transaction-aware caching layer on top of etcd (https://github.com/tailscale/tailetc/blob/main/tailetc.go). That's quite a brave task considering how many databases fail at Jepsen. Have you tried running Jepsen tests on tailetc yourself? You also mentioned a secondary index system, which I assume is built on top of tailetc again? How does that interact with tailetc?
Considering that high availability was not a requirement, and that the main problem with the previous solution was performance ("writes went from nearly a second (sometimes worse!) to milliseconds"), it looks like a simple server with SQLite plus some indexes could have gotten you quite far.
We don't really get the full overview from a short blog post like this, though, so maybe it will turn out to be a great solution for you. The code quality itself looks great, and it seems you have thought about all of the hard problems.
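The "transaction-aware caching layer" the comment questions can be illustrated with a minimal sketch: transactions buffer their writes, read their own uncommitted values first, and apply everything atomically on commit. This is not tailetc's actual API or implementation (which sits on top of an etcd watch stream); the `Store`/`Tx` names and the in-memory backing map here are purely illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// Store is a toy stand-in for the backing key-value store
// (etcd, in tailetc's case). Illustrative only.
type Store struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewStore() *Store { return &Store{data: map[string]string{}} }

// Tx buffers writes locally so a transaction sees its own
// uncommitted values without exposing them to other readers.
type Tx struct {
	s       *Store
	pending map[string]string
}

func (s *Store) Begin() *Tx {
	return &Tx{s: s, pending: map[string]string{}}
}

// Get prefers the transaction's own pending writes, then
// falls back to the shared store.
func (tx *Tx) Get(key string) (string, bool) {
	if v, ok := tx.pending[key]; ok {
		return v, true
	}
	tx.s.mu.RLock()
	defer tx.s.mu.RUnlock()
	v, ok := tx.s.data[key]
	return v, ok
}

// Put only records the write; nothing is visible until Commit.
func (tx *Tx) Put(key, value string) { tx.pending[key] = value }

// Commit applies all buffered writes under one lock, so other
// readers observe either none or all of the transaction's effects.
func (tx *Tx) Commit() {
	tx.s.mu.Lock()
	defer tx.s.mu.Unlock()
	for k, v := range tx.pending {
		tx.s.data[k] = v
	}
}

func main() {
	s := NewStore()
	tx := s.Begin()
	tx.Put("user/1", "alice")
	v, _ := tx.Get("user/1") // read-your-writes inside the tx
	fmt.Println(v)
	tx.Commit()
	v2, ok := s.Begin().Get("user/1") // visible after commit
	fmt.Println(v2, ok)
}
```

Even this toy version hints at why the comment calls the real thing "brave": once the backing store is a remote etcd cluster with watches and leases rather than a local map, keeping the cache and the store consistent is exactly the kind of problem Jepsen tests are built to break.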
lungo
- Show HN: Mongita is to MongoDB as SQLite is to SQL
- An Unlikely Database Migration
I found myself in a similar situation some time ago with MongoDB. In one project, my unit tests started slowing me down too much to be productive. In another, I had so little data that running a server alongside it was a waste of resources. I invested a couple of weeks in developing an SQLite-style library[1] for Go that implements the official Go driver's API, with a small wrapper to select between the two. So far it has paid huge dividends in both projects' ongoing simplicity and was totally worth the investment.
[1]: https://github.com/256dpi/lungo
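The "small wrapper to select between the two" describes a common pattern: code against one interface, and pick an embedded or server-backed implementation at startup. The sketch below shows the shape of that pattern with stdlib only; the `Collection`/`Open` names and the map-backed store are hypothetical, not lungo's actual API (lungo mirrors the official MongoDB Go driver instead).

```go
package main

import "fmt"

// Collection is the minimal surface both backends share.
// Hypothetical; the real wrapper would mirror the MongoDB driver.
type Collection interface {
	InsertOne(doc map[string]any) error
	FindOne(id string) (map[string]any, bool)
}

// memCollection is an in-process backend, analogous to running
// lungo's embedded engine in unit tests.
type memCollection struct {
	docs map[string]map[string]any
}

func newMemCollection() *memCollection {
	return &memCollection{docs: map[string]map[string]any{}}
}

func (c *memCollection) InsertOne(doc map[string]any) error {
	id, ok := doc["_id"].(string)
	if !ok {
		return fmt.Errorf("document missing string _id")
	}
	c.docs[id] = doc
	return nil
}

func (c *memCollection) FindOne(id string) (map[string]any, bool) {
	d, ok := c.docs[id]
	return d, ok
}

// Open selects a backend from configuration: tests use the
// embedded engine, production would dial a real server
// (stubbed out in this sketch).
func Open(embedded bool) Collection {
	if embedded {
		return newMemCollection()
	}
	panic("server backend not implemented in this sketch")
}

func main() {
	coll := Open(true)
	if err := coll.InsertOne(map[string]any{"_id": "1", "name": "alice"}); err != nil {
		panic(err)
	}
	doc, _ := coll.FindOne("1")
	fmt.Println(doc["name"])
}
```

The payoff the commenter describes follows from the interface boundary: unit tests run against the in-process backend with zero setup, while production flips one configuration switch to talk to a real server.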
What are some alternatives?
go-memdb - Golang in-memory database built on immutable radix trees
mongita - "Mongita is to MongoDB as SQLite is to SQL"
etcd - Distributed reliable key-value store for the most critical data of a distributed system
indradb - A graph database written in Rust
mongodb-memory-server - Spinning up mongod in memory for fast tests. If you run tests in parallel this lib helps to spin up dedicated mongodb servers for every test file in MacOS, *nix, Windows or CI environments (in most cases with zero-config).
sortedcontainers - Python Sorted Container Types: Sorted List, Sorted Dict, and Sorted Set
lua-mongo - MongoDB Driver for Lua
SQLBoiler - Generate a Go ORM tailored to your database schema.
mongoifc - The implementation of the interfaces for the official MongoDB driver in Go
homelab - Brad's homelab setup