| | marmot | dqlite |
|---|---|---|
| Mentions | 33 | 33 |
| Stars | 1,628 | 3,716 |
| Growth | - | 0.9% |
| Activity | 8.6 | 9.5 |
| Latest commit | 3 months ago | 7 days ago |
| Language | Go | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marmot
-
Distributed SQLite: Paradigm shift or hype?
If you're willing to accept eventual consistency (a big ask, but acceptable in some scenarios), then there are options like marmot [1] that replicate CDC over NATS.
[1]: https://github.com/maxpert/marmot
- Marmot: Multi-writer distributed SQLite based on NATS
- Why you should probably be using SQLite
-
The Raft Consensus Algorithm
I've written a whole SQLite replication system that works on top of Raft (https://github.com/maxpert/marmot). The best part is that Raft has a well-understood and strong library ecosystem as well. I started off with libraries, and when I noticed I was reimplementing distributed streams, I just took an off-the-shelf implementation (https://docs.nats.io/nats-concepts/jetstream) and embedded it in the system. I love the simplicity and reasoning that comes with Raft. However, I am playing with EPaxos these days (https://www.cs.cmu.edu/~dga/papers/epaxos-sosp2013.pdf), because then I can truly decentralize the implementation and make it masterless. Right now I've added a sharding mechanism across various streams so that in high-load cases the masters can be distributed across nodes too.
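For a sense of what that stream-sharding idea can look like, here is a minimal sketch in Go using the nats.go JetStream client. The stream names, subjects, and change-record shape are made up for illustration and are not Marmot's actual wire format or code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
	"log"

	"github.com/nats-io/nats.go"
)

// changeRecord is a hypothetical shape for a captured row change;
// Marmot's real change-log format may differ.
type changeRecord struct {
	Table  string                 `json:"table"`
	Op     string                 `json:"op"` // insert, update, delete
	RowID  int64                  `json:"row_id"`
	Values map[string]interface{} `json:"values"`
}

const numShards = 4

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// One replicated stream per shard; each stream elects its own leader
	// inside JetStream, so leaders can land on different nodes under load.
	for i := 0; i < numShards; i++ {
		_, err := js.AddStream(&nats.StreamConfig{
			Name:     fmt.Sprintf("CHANGES_%d", i),
			Subjects: []string{fmt.Sprintf("changes.%d.>", i)},
			Replicas: 3,
		})
		if err != nil {
			log.Fatal(err)
		}
	}

	// Route a change to a shard by hashing its table name, then publish.
	rec := changeRecord{Table: "users", Op: "insert", RowID: 42,
		Values: map[string]interface{}{"name": "alice"}}
	payload, _ := json.Marshal(rec)

	h := fnv.New32a()
	h.Write([]byte(rec.Table))
	shard := h.Sum32() % numShards

	subject := fmt.Sprintf("changes.%d.%s", shard, rec.Table)
	if _, err := js.Publish(subject, payload); err != nil {
		log.Fatal(err)
	}
}
```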
-
SQLedge: Replicate Postgres to SQLite on the Edge
Very interesting! I have a question (based on my experience with https://github.com/maxpert/marmot): how do you get around the boot time, especially when the change log of a table is pretty large in Postgres? I've implemented a snapshotting mechanism in Marmot as part of quickly getting up to speed. At some level I wonder if we could just feed this PG replication log into a NATS cluster and Marmot could just replicate it across the board.
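The boot-time concern raised above is what the snapshot-plus-catch-up approach addresses: restore a recent snapshot, then replay only the change-log entries newer than it, rather than the whole log. A minimal Go sketch of that idea, with hypothetical snapshot and log types that are not Marmot's actual code:

```go
package main

import "fmt"

// snapshot is a hypothetical full copy of the database taken at a
// known change-log sequence number.
type snapshot struct {
	Seq  uint64
	Data map[string]string
}

// logEntry is a hypothetical change-log record.
type logEntry struct {
	Seq   uint64
	Key   string
	Value string
}

// restore rebuilds local state from the snapshot, then applies only
// the log entries newer than the snapshot, instead of replaying the
// whole (potentially huge) change log from the beginning.
func restore(snap snapshot, logs []logEntry) map[string]string {
	state := make(map[string]string, len(snap.Data))
	for k, v := range snap.Data {
		state[k] = v
	}
	for _, e := range logs {
		if e.Seq <= snap.Seq {
			continue // already included in the snapshot
		}
		state[e.Key] = e.Value
	}
	return state
}

func main() {
	snap := snapshot{Seq: 100, Data: map[string]string{"user:1": "alice"}}
	logs := []logEntry{
		{Seq: 99, Key: "user:1", Value: "stale"}, // skipped, predates snapshot
		{Seq: 101, Key: "user:2", Value: "bob"},  // applied
	}
	fmt.Println(restore(snap, logs))
}
```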
-
Show HN: Blueprint for a distributed multi-region IAM with Go and CockroachDB
One of the reasons I started writing Marmot (https://maxpert.github.io/marmot/) was to replicate a bunch of tables across regions that were read heavy. I even used it for cache replication (because who cares if it's a cache miss, but a hit will save me time and money). It's hard to make such blueprints in the early days of a product, and by the time you hit true growth almost everyone builds a custom solution for multi-region IAM.
-
Stalwart All-in-One Mail Server (IMAP, JMAP, SMTP)
Amazing, I was just looking for a good mail server to configure for my demo. Which reminds me: since you folks have mentioned Litestream, have you tried Marmot (https://github.com/maxpert/marmot)? I recently configured Isso with Marmot to scale it out horizontally (https://maxpert.github.io/marmot/demo). I am super curious what kind of write workload a sub-thousand-person organization will have and whether Marmot can help scale it horizontally without FoundationDB. I always find the convenience of SQLite amazing.
- Marmot: A distributed SQLite replicator built on top of NATS
-
LiteFS Cloud: Distributed SQLite with Managed Backups
Great that you brought it up. I will fill in the perspective of how I am solving this in Marmot (https://github.com/maxpert/marmot). Today Marmot already records changes by installing triggers on a table, so all offline changes (made while Marmot is not running) are never lost. When Marmot comes up after a long time offline (depending on the max_log_size configuration), it realizes that and tries to catch up by restoring a snapshot and then applying the rest of the logs from the NATS (JetStream) change streams. I am working on a change that will publish those local change logs to NATS before restoring the snapshot; once those changes are reapplied after the snapshot is restored, everyone will have your changes and your DB will be up to date. Now in this case one of the things that bothers people is the fact that if two nodes come up with conflicting rows, the last writer wins.
For that I am also exploring SQLite-Y-CRDT (https://github.com/maxpert/sqlite-y-crdt), which can help me treat each row as a document and then try to merge them. I personally think CRDTs can get hard to reason about sometimes, and might not be explainable to entry-level developers. Usually when something is hard to reason about and explain, I prefer sticking to simplicity. People, IMO, will be much more comfortable knowing they can't use auto-incrementing IDs for particular tables (because two independent nodes can increment the counter to the same value) than with a magical merge that will mess up their data.
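A rough illustration of the trigger-based change capture described above, written in Go with the modernc.org/sqlite driver; the table, trigger, and change-log schema are invented for the example and are not Marmot's actual schema:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A user table plus a change-log table and an insert trigger.
	// Because the trigger lives inside the database file, changes made
	// while the replicator process is down are still recorded.
	stmts := []string{
		`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`,
		`CREATE TABLE IF NOT EXISTS change_log (
			seq    INTEGER PRIMARY KEY AUTOINCREMENT,
			tbl    TEXT NOT NULL,
			op     TEXT NOT NULL,
			row_id INTEGER NOT NULL
		)`,
		`CREATE TRIGGER IF NOT EXISTS users_insert_log
		 AFTER INSERT ON users
		 BEGIN
			 INSERT INTO change_log (tbl, op, row_id) VALUES ('users', 'insert', NEW.id);
		 END`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}

	if _, err := db.Exec(`INSERT INTO users (name) VALUES (?)`, "alice"); err != nil {
		log.Fatal(err)
	}

	// The replicator would read change_log rows in seq order and publish
	// them to NATS; here we just print them.
	rows, err := db.Query(`SELECT seq, tbl, op, row_id FROM change_log ORDER BY seq`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var seq, rowID int64
		var tbl, op string
		if err := rows.Scan(&seq, &tbl, &op, &rowID); err != nil {
			log.Fatal(err)
		}
		fmt.Println(seq, tbl, op, rowID)
	}
}
```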
dqlite
-
Marmot: Multi-writer distributed SQLite based on NATS
If you're interested in this, here are some related projects that all take slightly different approaches:
- LiteSync directly competes with Marmot and supports DDL sync, but is closed source commercial (similar to SQLite EE): https://litesync.io
- dqlite is Canonical's distributed SQLite that depends on c-raft and kernel-level async I/O: https://dqlite.io
- cr-sqlite is a Rust-based loadable extension that adds CRDT changeset generation and reconciliation to SQLite: https://github.com/vlcn-io/cr-sqlite
Slightly related but not really (no multi-writer support, no C-level SQLite API, or other restrictions):
- comdb2 (Bloomberg's multi-homed RDBMS using SQLite as the frontend)
- rqlite: RDBMS with an HTTP API and SQLite as the storage engine, used for replication and strong consistency (does not scale writes)
- litestream/LiteFS: disaster recovery replication
- liteserver: active read-only replication (predecessor of LiteSync)
- I'm All-In on Server-Side SQLite
-
SQLite performance tuning: concurrent reads, multiple GBs and 100k SELECTs/s
I'd be curious to see a similar tuning with dqlite: https://github.com/canonical/dqlite
- Strong Consistency with Raft and SQLite
-
9 years of open-source database development: reviewing the designs
Does anyone know how the DB this is about, https://rqlite.io/, compares with https://dqlite.io/ by Canonical (both seem to be distributed versions of SQLite)?
- SQLite the only database you will ever need in most cases
-
Transcending Posix: The End of an Era?
For folks' context, the new tool being discussed in the thread the parent mentions is litefs [0]; you can also look at rqlite [1] and dqlite [2], which all provide different trade-offs (e.g. rqlite is 'more strongly consistent' than litefs).
[0]: https://github.com/superfly/litefs
[1]: https://github.com/rqlite/rqlite
[2]: https://github.com/canonical/dqlite
-
SQLite is not a toy database
I presume you're familiar with https://github.com/canonical/dqlite (made by my employer) and https://github.com/rqlite/rqlite (unrelated)? How will mvsqlite compare to those?
-
GitDB, a distributed embeddable database on top of Git
Check out dqlite; it's SQLite but with Raft consensus to distribute changes through a log: https://dqlite.io/ You can link it in as a library too; it sounds like exactly what you want.
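For the "link it in as a library" part, here is roughly what that can look like through the Go bindings (github.com/canonical/go-dqlite) and their app helper; the data directory, addresses, and database name below are placeholders, and the dqlite docs remain the authoritative reference for the actual setup:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/canonical/go-dqlite/app"
)

func main() {
	// Each node gets its own data directory and listen address; the
	// other nodes' addresses are passed as the cluster to join.
	node, err := app.New("/var/lib/myapp",
		app.WithAddress("127.0.0.1:9001"),
		app.WithCluster([]string{"127.0.0.1:9002", "127.0.0.1:9003"}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer node.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Wait until this node has joined the cluster.
	if err := node.Ready(ctx); err != nil {
		log.Fatal(err)
	}

	// Open returns a standard *sql.DB; writes go through the Raft log.
	db, err := node.Open(ctx, "my-database")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
		log.Fatal(err)
	}
}
```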
- Ask HN: Free and open source distributed database written in C++ or C
What are some alternatives?
pocketbase - Open Source realtime backend in 1 file
rqlite - The lightweight, distributed relational database built on SQLite.
cr-sqlite - Convergent, Replicated SQLite. Multi-writer and CRDT support for SQLite
kine - Run Kubernetes on MySQL, Postgres, sqlite, dqlite, not etcd.
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines
better-sqlite3 - The fastest and simplest library for SQLite3 in Node.js.
wordpress-playground - Run WordPress in the browser via WebAssembly PHP
litestream - Streaming replication for SQLite.
mssql-changefeed
boringproxy - Simple tunneling reverse proxy with a fast web UI and auto HTTPS. Designed for self-hosters.
Bedrock - Rock solid distributed database specializing in active/active automatic failover and WAN replication