crdt-benchmarks
fdb-document-layer
| | crdt-benchmarks | fdb-document-layer |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 397 | 204 |
| Growth | - | 0.5% |
| Activity | 0.0 | 0.0 |
| Last commit | 2 months ago | almost 3 years ago |
| Language | JavaScript | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
crdt-benchmarks
-
JSON-joy CRDT benchmarks, 100x speed improvement over state-of-the-art
Author of Yjs here. I'm all for faster data structures. But benchmarking only one dimension looks quite fishy to me. A CRDT needs to be adequate across multiple dimensions. At the very least you should describe the tradeoffs in your article.
The time to insert characters is the least interesting property of a CRDT. It doesn't matter to the user whether a character is inserted within .1ms or .000000001ms. No human can type that fast.
It would be much more interesting to benchmark the time it takes to load a document containing X operations. Yjs & Yrs are pretty performant and conservative on memory here because they don't have to build an index (it's a tradeoff that we took consciously).
When benchmarking it is important to measure the right things and interpret the results so that you can give recommendations on when to use your algorithm / implementation. Some things can't be fast/low enough (e.g. time to load a document, time to apply updates, memory consumption, ...), while other things only need to be adequate (e.g. time to insert a character into a document).
Unfortunately, a lot of academic papers set a bad trend of only measuring one dimension. Yeah, it's really easy to succeed in one dimension (e.g. memory or insertion-time) and it is very nice click-bait. But that doesn't make your CRDT a viable option in practice.
I maintain a set of benchmarks that tests multiple dimensions [1]. I'd love to receive a PR from you.
-
CRDT-richtext: Rust implementation of Peritext and Fugue
Diamond types author here! Congratulations on getting your CRDT working! It’s lovely to see a new generation of CRDTs with decent performance.
And nice stuff implementing peritext! I’d love to do the same in diamond types at some point. You beat me to it!
I’m building a little repository of real-world collaborative editing traces to use when benchmarking, comparing and optimising text-based CRDTs [1]. The automerge-perf editing trace isn’t enough on its own. And we’re increasingly converging on a format for multi-user concurrent editing traces too [2]. It’d be great to add some rich text editing traces into the mix if you’re interested in recording something, so we can also compare how Peritext performs in different systems.
Anyway, welcome to the community! Love to have more implementations around!
-
Cloudant/IBM back off from FoundationDB based CouchDB rewrite
So yes, a particularly large document is not the norm, but it can happen.
JavaScript CRDTs can be quite performant, see the Yjs benchmarks: https://github.com/dmonad/crdt-benchmarks
- Automerge: A JSON-like data structure (a CRDT) that can be modified concurrently
- Automerge: a new foundation for collaboration software [video]
- Show HN: SyncedStore CRDT – build multiplayer collaborative apps for React / Vue
- 5000x Faster CRDTs: An Adventure in Optimization
fdb-document-layer
-
Turning SQLite into a Distributed Database
This is exactly what the engineers behind FoundationDB (FDB) wanted when they open-sourced it. For those who don't know, FDB provides a transactional (and distributed) ordered key-value store with a fairly simple but very powerful API.
Their vision was to solve the hardest parts of building a database, such as transactions, fault tolerance, high availability, elastic scaling, etc. This frees users to build higher-level APIs ("Layers") [1] / libraries [2] on top.
The beauty of these layers is that you can basically remove doubt about the correctness of data once it leaves the layer. FoundationDB is one of the most tested databases out there, if not the most tested. I used it for over 4 years in high-write/read production environments and never once did we second-guess our decision.
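The "layer" pattern described above can be sketched in miniature: a thin document abstraction that maps its data onto an ordered key-value store and relies on range reads to reassemble it. This is an illustrative sketch only; a sorted in-memory map stands in for FoundationDB, and the `doc/<id>/<field>` key encoding is invented for illustration rather than the scheme the actual Document Layer uses.

```javascript
// Sketch of the "layer" pattern: a minimal document layer on top of an
// ordered key-value store. An in-memory map stands in for FoundationDB;
// the key encoding is hypothetical.
class OrderedKV {
  constructor() { this.map = new Map(); }
  set(key, value) { this.map.set(key, value); }
  // Range read: all entries whose key starts with `prefix`, in key order.
  getRange(prefix) {
    return [...this.map.entries()]
      .filter(([k]) => k.startsWith(prefix))
      .sort(([a], [b]) => (a < b ? -1 : 1));
  }
}

// The "document layer": flattens a JSON-ish document into ordered
// key-value pairs, and reassembles it with a single range read.
class DocumentLayer {
  constructor(kv) { this.kv = kv; }
  put(id, doc) {
    for (const [field, value] of Object.entries(doc)) {
      this.kv.set(`doc/${id}/${field}`, JSON.stringify(value));
    }
  }
  get(id) {
    const doc = {};
    for (const [key, value] of this.kv.getRange(`doc/${id}/`)) {
      doc[key.split('/')[2]] = JSON.parse(value);
    }
    return doc;
  }
}

const layer = new DocumentLayer(new OrderedKV());
layer.put('42', { name: 'ada', score: 7 });
console.log(layer.get('42')); // { name: 'ada', score: 7 }
```

In the real system, the KV store underneath also supplies transactions, so every multi-key operation the layer performs commits atomically; that is what lets a layer inherit FDB's correctness guarantees for free.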
I could see this project renamed to simply "fdb-sqlite-layer"
-
Cloudant/IBM back off from FoundationDB based CouchDB rewrite
https://github.com/FoundationDB/fdb-document-layer, and you get transactional integrity.
I stopped using MongoDB and switched to this.
- FoundationDB Document Layer
- A truly open-source MongoDB alternative
- FoundationDB: A Distributed, Unbundled, Transactional Key Value Store [pdf]
What are some alternatives?
automerge - A JSON-like data structure (a CRDT) that can be modified concurrently by different users, and merged again automatically.
mvsqlite - Distributed, MVCC SQLite that runs on FoundationDB.
diamond-types - The world's fastest CRDT. WIP.
foundationdb - FoundationDB - the open source, distributed, transactional key-value store
electric - Local-first sync layer for web and mobile apps. Build reactive, realtime, local-first apps directly on Postgres.
badger - Fast key-value DB in Go.
teletype-crdt - String-wise sequence CRDT powering peer-to-peer collaborative editing in Teletype for Atom.
wasmer-postgres - 💽🕸 Postgres library to run WebAssembly binaries.
y-crdt - Rust port of Yjs
npm-registry-couchapp - couchapp bits of registry.npmjs.org
automerge-rs - Rust implementation of automerge [Moved to: https://github.com/automerge/automerge]
mosql - MongoDB → PostgreSQL streaming replication