pigeon vs automerge-perf

| | pigeon | automerge-perf |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 54 | 35 |
| Growth | - | - |
| Activity | 0.0 | 3.2 |
| Last commit | over 1 year ago | 7 months ago |
| Language | JavaScript | JavaScript |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pigeon
-
You might not need a CRDT
We have used Automerge a bunch, but found that beyond a certain document size, performance gets dramatically worse, until even trivial updates cost many seconds of CPU. That tends to happen when the document's end state is exclusively the sum of all the edits that have ever happened.
Our answer was to reimplement the Automerge API with different mechanics underneath that allow a "snapshots + recent changes" paradigm, instead of "the doc is the sum of all changes". That way performance doesn't have to degrade over time as changes accumulate.
Project is here: https://github.com/frameable/pigeon, with some benchmarks: https://github.com/frameable/pigeon/wiki/Benchmarks in the wiki...
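The "snapshots + recent changes" idea can be sketched in a few lines. This is a hypothetical illustration of the paradigm, not Pigeon's actual implementation; `makeStore`, `applyChange`, and the change format are invented names for this sketch:

```javascript
// Sketch: keep a materialized snapshot plus a short list of recent changes,
// instead of deriving the document by replaying its full edit history.

function applyChange(doc, change) {
  // A "change" here is just a set of top-level key assignments.
  return { ...doc, ...change.ops };
}

function makeStore(initial) {
  let snapshot = { ...initial }; // materialized current state
  let recent = [];               // only changes not yet folded into the snapshot
  const MAX_RECENT = 100;        // compaction threshold (arbitrary for the sketch)

  return {
    change(ops) {
      const change = { ops, ts: Date.now() };
      recent.push(change);
      snapshot = applyChange(snapshot, change);
      if (recent.length > MAX_RECENT) recent = []; // fold history into the snapshot
      return change;
    },
    // Reads are O(1): no replay of the full edit history is needed.
    get doc() { return snapshot; },
    get pendingChanges() { return recent.slice(); },
  };
}

const store = makeStore({ title: "hello" });
store.change({ title: "hello world" });
store.change({ author: "me" });
console.log(store.doc); // current state, independent of history length
```

The key property is that read and write cost stays flat as edits accumulate, at the price of keeping less history around for merging, which is the trade-off the post describes.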
-
Pigeon - Fast diff, patch, merge, and synchronizing JSON documents with an Automerge-compatible interface in JavaScript
Some benchmarks are here: https://github.com/frameable/pigeon/wiki/Benchmarks
- Show HN: Pigeon – Fast diff, patch, merge for JSON with an Automerge-like API
automerge-perf
-
Announcing crop, the fastest UTF-8 text rope for Rust
The automerge folks have a real-life editing history of a large document in their benchmarks: https://github.com/automerge/automerge-perf
-
You might not need a CRDT
This is an implementation problem with Automerge. I wrote a blog post last year about CRDT performance, and re-ran the benchmarks a couple of months ago. Automerge has improved a lot since then, but a simple benchmark (automerge-perf[1]) still takes 200 MB of RAM using automerge-rs. Yjs and Diamond Types can run the same benchmark in 4 MB / 2 MB of RAM respectively.
I've had a chat with some of the Automerge people about it. They're working on it, and I've shared the techniques I'm using in Diamond Types (and all the code). It's just an implementation bottleneck.
[1] https://github.com/automerge/automerge-perf/
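For context on what the benchmark does: the automerge-perf repo provides a real editing trace of a large document, and implementations are compared by replaying it. A minimal baseline replay into a plain character array might look like the sketch below; the edit format `[position, deleteCount, ...insertedChars]` is assumed from the repo's description, and the tiny trace here is hand-made, not the real dataset:

```javascript
// Replay an editing trace (each edit: [position, deleteCount, ...insertedChars])
// into a plain array of characters. This is the naive baseline against which
// CRDT memory/CPU overhead is measured.

function replay(edits) {
  const chars = [];
  for (const [pos, delCount, ...inserted] of edits) {
    chars.splice(pos, delCount, ...inserted);
  }
  return chars.join("");
}

// A tiny hand-made trace in the same shape as the dataset:
const edits = [
  [0, 0, "h", "i"], // insert "hi" at position 0
  [2, 0, "!"],      // append "!"
  [0, 1, "H"],      // replace "h" with "H"
];
console.log(replay(edits)); // "Hi!"
```

A CRDT implementation replays the same trace through its own insert/delete API, which is where the memory figures quoted above (200 MB vs 4 MB vs 2 MB) come from.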
What are some alternatives?
plane - A distributed system for running WebSocket services at scale.
jumprope-rs
peritext - A CRDT for asynchronous rich-text collaboration, where authors can work independently and then merge their changes.
aper - A Rust data structure library built on state machines.
crop - 🌾 A pretty fast text rope
statebox_riak - Convenience library that makes it easier to use statebox with riak, extracted from best practices in our production code at Mochi Media.
jdd - A semantic JSON compare tool
diamond-types - The world's fastest CRDT. WIP.