zstd vs RocksDB

| | zstd | RocksDB |
|---|---|---|
| Mentions | 109 | 44 |
| Stars | 22,480 | 27,448 |
| Growth | 1.7% | 0.9% |
| Activity | 9.7 | 9.8 |
| Latest commit | 7 days ago | 6 days ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zstd
-
Rethinking string encoding: a 37.5% more space-efficient encoding than UTF-8 in Fury
> In such cases, the serialized binaries are mostly 200~1000 bytes. Not big enough for zstd to work
You're not referring to the same dictionary that I am. Look at --train in [1].
If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).
If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time, or you want maximum compression performance), you can gather this corpus transparently as messages are generated, then generate a dictionary and attach it as sideband metadata to a message. You'll probably need to defer decoding if a message references a dictionary you haven't received yet (i.e. if the transport delivers messages out of order relative to generation).

There are other techniques you can apply, but the general rule is that a custom encoding scheme is unlikely to outperform zstd plus a representative training corpus. If it does, you'd need to actually show this rather than argue from first principles.
[1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
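For the curious, here's roughly what that flow looks like against zstd's C API (callable from C++). This is a minimal sketch: the training corpus is made-up stand-in data, and the receiving side would call ZSTD_decompress_usingDict with the same pre-shared dictionary.

```
// Sketch: train a zstd dictionary on many small representative messages,
// then compress a new small message with it. Assumes libzstd is installed;
// the corpus below is hypothetical stand-in data.
#include <zstd.h>
#include <zdict.h>   // ZDICT_trainFromBuffer
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Representative corpus: small messages concatenated, with each
    // sample's size recorded so the trainer knows the boundaries.
    std::string corpus;
    std::vector<size_t> sizes;
    for (int i = 0; i < 1000; ++i) {
        std::string msg = "{\"id\":" + std::to_string(i) +
                          ",\"level\":\"info\",\"msg\":\"user logged in\"}";
        corpus += msg;
        sizes.push_back(msg.size());
    }

    // Train a dictionary (110 KB is the zstd CLI's default target size).
    std::vector<char> dict(110 * 1024);
    size_t dict_size = ZDICT_trainFromBuffer(dict.data(), dict.size(),
                                             corpus.data(), sizes.data(),
                                             (unsigned)sizes.size());
    if (ZDICT_isError(dict_size)) return 1;

    // Compress one new ~50-byte message with the pre-shared dictionary.
    std::string msg = "{\"id\":1001,\"level\":\"warn\",\"msg\":\"disk nearly full\"}";
    std::vector<char> out(ZSTD_compressBound(msg.size()));
    ZSTD_CCtx* cctx = ZSTD_createCCtx();
    size_t csize = ZSTD_compress_usingDict(cctx, out.data(), out.size(),
                                           msg.data(), msg.size(),
                                           dict.data(), dict_size, 3);
    if (!ZSTD_isError(csize))
        std::printf("raw=%zu with-dict=%zu bytes\n", msg.size(), csize);
    ZSTD_freeCCtx(cctx);
}
```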
-
Drink Me: (Ab)Using a LLM to Compress Text
> Doesn't take large amount of GPU resources
This is an understatement; zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...
My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
-
SQLite VFS for ZSTD seekable format
This VFS reads a SQLite file after it has been compressed using the [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
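To illustrate the idea (not the actual contrib API): the seekable format compresses data as many independent frames plus a seek table, so a reader can decompress just the frame covering a requested offset instead of the whole file. A hand-rolled sketch using only core zstd calls, with a homemade index standing in for the real seek table and an illustrative 64 KiB chunk size:

```
#include <zstd.h>
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Frame { std::string bytes; size_t raw_offset, raw_size; };

int main() {
    std::string input(1 << 20, 'x');   // stand-in for a SQLite file
    const size_t chunk = 64 * 1024;    // one independent frame per 64 KiB

    std::vector<Frame> frames;         // compressed frames + offset index
    for (size_t off = 0; off < input.size(); off += chunk) {
        size_t n = std::min(chunk, input.size() - off);
        std::string out(ZSTD_compressBound(n), '\0');
        size_t c = ZSTD_compress(out.data(), out.size(),
                                 input.data() + off, n, 3);
        out.resize(c);
        frames.push_back({std::move(out), off, n});
    }

    // "Seek": read the byte at offset 500000 by decompressing one frame.
    size_t want = 500000;
    for (const Frame& f : frames) {
        if (want >= f.raw_offset && want < f.raw_offset + f.raw_size) {
            std::string raw(f.raw_size, '\0');
            ZSTD_decompress(raw.data(), raw.size(),
                            f.bytes.data(), f.bytes.size());
            std::printf("byte %zu = %c\n", want, raw[want - f.raw_offset]);
            break;
        }
    }
}
```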
-
Chrome Feature: ZSTD Content-Encoding
Of course, you may get different results with another dataset.
```
gzip (zlib -6) [ratio=32%] [compr=35 MB/s]  [dec=407 MB/s]
zstd (zstd -2) [ratio=32%] [compr=356 MB/s] [dec=1067 MB/s]
```
NB1: The default level for zstd is -3, but the table only had -2; the difference is probably small. Levels range from 1-22 for zstd and 1-9 for gzip.
NB2: The default gzip program (at least on Debian) is the executable from zlib. In my workflows, libdeflate-gzip is compatible and noticeably faster.
NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases
For high compression, according to this benchmark, xz can do slightly better than zstd's high levels, if you're willing to pay a roughly 10× penalty on decompression speed.
```
xz -9 [ratio=23%] [compr=2.6 MB/s] [dec=88 MB/s]
```
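If you want to see the level trade-off on your own data, here's a small sketch using zstd's C API that compresses one buffer at a few levels; the input is trivial stand-in text, so the absolute numbers mean nothing.

```
#include <zstd.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::string input;
    for (int i = 0; i < 2000; ++i)
        input += "the quick brown fox jumps over the lazy dog ";

    // Default level is 3; ZSTD_maxCLevel() is 22 in current releases.
    const int levels[] = {2, 3, 19, ZSTD_maxCLevel()};
    for (int level : levels) {
        std::vector<char> out(ZSTD_compressBound(input.size()));
        size_t n = ZSTD_compress(out.data(), out.size(),
                                 input.data(), input.size(), level);
        if (!ZSTD_isError(n))
            std::printf("level %2d: %zu -> %zu bytes\n",
                        level, input.size(), n);
    }
}
```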
- Zstandard v1.5.6 – Chrome Edition
-
Optimizing Rabin-Karp Hashing
Compression, synchronization, and backup systems often use a rolling hash to implement "content-defined chunking", an effective form of deduplication.
In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483, which replaces a Rabin-Karp variant with Gear hashing, achieving a >2x speedup.
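For reference, a minimal sketch of Gear-based content-defined chunking; the gear table seed, the mask, and the cut rule here are illustrative, not the PR's actual parameters.

```
#include <cstdint>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// 256 random 64-bit values, one per possible byte. Fixed seed so both
// sides of a sync protocol produce the same boundaries.
static std::vector<uint64_t> make_gear_table() {
    std::mt19937_64 rng(0x12345678);
    std::vector<uint64_t> t(256);
    for (auto& v : t) v = rng();
    return t;
}

// Gear rolling hash: hash = (hash << 1) + gear[byte]. Boundaries depend
// only on content; this variant cuts when the masked bits are all zero,
// giving ~8 KiB average chunks for a 13-bit mask.
static std::vector<size_t> chunk_boundaries(const std::string& data) {
    static const std::vector<uint64_t> gear = make_gear_table();
    const uint64_t mask = (1ull << 13) - 1;
    std::vector<size_t> cuts;
    uint64_t h = 0;
    for (size_t i = 0; i < data.size(); ++i) {
        h = (h << 1) + gear[(uint8_t)data[i]];
        if ((h & mask) == 0) { cuts.push_back(i + 1); h = 0; }
    }
    cuts.push_back(data.size());   // final partial chunk
    return cuts;
}

int main() {
    std::string data(1 << 20, 'x');
    for (size_t i = 0; i < data.size(); i += 4096)
        data[i] = char(i * 2654435761u);   // sprinkle some variation
    std::printf("%zu chunks\n", chunk_boundaries(data).size());
}
```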
- Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
-
Cyberpunk 2077 dev release
Get the data:
https://publicdistst.blob.core.windows.net/data/root.tar.zst
magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst
How to extract: https://github.com/facebook/zstd
Linux (Debian): `sudo apt install zstd`, then:
```
tar -I 'zstd -d -T0' -xvf root.tar.zst
```
-
Honey, I shrunk the NPM package · Jamie Magee
I've done that experiment with zstd before.
https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md...
Not sure about brotli though.
-
How in the world should we unpack archive.org zst files on Windows?
If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
RocksDB
-
How to choose the right type of database
RocksDB: A high-performance embedded database optimized for multi-core CPUs and fast storage like SSDs. Its use of a log-structured merge-tree (LSM tree) makes it suitable for applications requiring high throughput and efficient storage, such as streaming data processing.
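A minimal sketch of what "embedded" means in practice, using RocksDB's basic C++ API; the path and keys are hypothetical.

```
#include <rocksdb/db.h>
#include <cassert>
#include <string>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/demo_db", &db);
    assert(s.ok());

    // Writes land in a memtable and WAL, then flush into sorted SST files
    // that background compaction merges -- the LSM-tree structure.
    s = db->Put(rocksdb::WriteOptions(), "user:42", "alice");
    assert(s.ok());

    std::string value;
    s = db->Get(rocksdb::ReadOptions(), "user:42", &value);
    assert(s.ok() && value == "alice");
    delete db;
}
```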
-
Fast persistent recoverable log and key-value store
[RocksDB](https://rocksdb.org/) isn’t a distributed storage system, fwiw. It’s an embedded KV engine similar to LevelDB, LMDB, or really SQLite (though that’s full SQL, not just KV).
-
The Hallucinated Rows Incident
To output the top 3 rocks, our engine first has to store all the rocks in some sorted way. To do this, we of course picked RocksDB, an embedded, lexicographically sorted key-value store, which acts as the sorting operation's persistent state. In our RocksDB state, the diffs are keyed by the value of weight, and since RocksDB is sorted, our stored diffs are automatically sorted by their weight.
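A sketch of that pattern with RocksDB's C++ API: weights encoded as fixed-width big-endian bytes so lexicographic key order matches numeric order, then an iterator walked backwards for the heaviest three. The names and values are hypothetical, not the engine's actual schema.

```
#include <rocksdb/db.h>
#include <cstdint>
#include <cstdio>
#include <string>

// Fixed-width big-endian encoding keeps byte order == numeric order.
// (A real schema would append a tiebreaker for equal weights.)
static std::string weight_key(uint64_t weight) {
    std::string k(8, '\0');
    for (int i = 7; i >= 0; --i) { k[i] = char(weight & 0xff); weight >>= 8; }
    return k;
}

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::DB* db = nullptr;
    rocksdb::DB::Open(options, "/tmp/rocks_sorted", &db);

    db->Put(rocksdb::WriteOptions(), weight_key(12), "pebble");
    db->Put(rocksdb::WriteOptions(), weight_key(700), "boulder");
    db->Put(rocksdb::WriteOptions(), weight_key(90), "cobble");

    // Heaviest first: start at the last key and step backwards.
    rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
    int n = 0;
    for (it->SeekToLast(); it->Valid() && n < 3; it->Prev(), ++n)
        std::printf("%s\n", it->value().ToString().c_str());
    delete it;
    delete db;
}
```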
-
In-memory vs. disk-based databases: Why do you need a larger than memory architecture?
The in-memory version of Memgraph uses Delta storage to support multi-version concurrency control (MVCC). However, for larger-than-memory storage, we decided to use optimistic concurrency control (OCC), since we assumed conflicts would rarely happen and we could make use of RocksDB’s transactions without a custom layer of complexity like the one Delta storage requires.
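For reference, RocksDB ships an OptimisticTransactionDB where conflicts surface at commit time rather than via up-front locking; a minimal sketch, with a hypothetical path and key:

```
#include <rocksdb/utilities/optimistic_transaction_db.h>
#include <cstdio>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::OptimisticTransactionDB* db = nullptr;
    rocksdb::Status s =
        rocksdb::OptimisticTransactionDB::Open(options, "/tmp/occ_db", &db);
    if (!s.ok()) return 1;

    rocksdb::WriteOptions wopts;
    rocksdb::Transaction* txn = db->BeginTransaction(wopts);
    txn->Put("node:1", "updated");

    // If another writer touched "node:1" since this txn accessed it,
    // Commit() fails (e.g. Status::Busy) and the caller retries.
    s = txn->Commit();
    std::printf("commit %s\n", s.ok() ? "ok" : "conflict, retry");
    delete txn;
    delete db;
}
```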
-
Local file non relational database with filter by value
I was looking at https://github.com/facebook/rocksdb/ but it seems not to allow queries by value, which is my last requirement.
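The usual workaround is to maintain your own value-first index alongside the primary records, since RocksDB only indexes keys. A sketch with hypothetical key prefixes:

```
#include <rocksdb/db.h>
#include <cstdio>
#include <string>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::DB* db = nullptr;
    rocksdb::DB::Open(options, "/tmp/byvalue_db", &db);

    // Primary record plus an inverted entry: "v:<value>:<key>" -> "".
    auto put = [&](const std::string& key, const std::string& value) {
        db->Put(rocksdb::WriteOptions(), "k:" + key, value);
        db->Put(rocksdb::WriteOptions(), "v:" + value + ":" + key, "");
    };
    put("user:1", "alice");
    put("user:2", "bob");
    put("user:3", "alice");

    // "Query by value" becomes a prefix scan over the inverted entries.
    std::string prefix = "v:alice:";
    rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
    for (it->Seek(prefix); it->Valid() && it->key().starts_with(prefix);
         it->Next())
        std::printf("match: %s\n", it->key().ToString().c_str());
    delete it;
    delete db;
}
```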
- Rocksdb over network
-
How RocksDB Works
Tuning RocksDB well is a very, very hard challenge, and one that I am happy to no longer do day to day. RocksDB is very powerful, but it comes with some very sharp edges. Compaction is one of them, and all answers are likely workload-dependent.
If you are worried about write amplification, then leveled compaction is sub-optimal. I would try universal compaction.
- https://github.com/facebook/rocksdb/wiki/Universal-Compactio...
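A minimal options sketch for switching to universal compaction, which trades space amplification for lower write amplification; the tuning value shown is illustrative, not a recommendation.

```
#include <rocksdb/db.h>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    options.compaction_style = rocksdb::kCompactionStyleUniversal;
    // Universal-compaction knobs live in their own struct:
    options.compaction_options_universal.max_size_amplification_percent = 200;

    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/universal_db", &db);
    if (s.ok()) delete db;
    return s.ok() ? 0 : 1;
}
```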
-
What are the advantages of using Rust to develop KV databases?
It's fairly challenging to write a KV database; it takes several years of development to get the balance right between performance, reliability, and avoiding data loss. Maybe read through the documentation for RocksDB https://github.com/facebook/rocksdb/wiki/RocksDB-Overview and watch the video on why it was developed; that may give you an impression of what is involved.
-
We’re the Meilisearch team! To celebrate v1.0 of our open-source search engine, Ask us Anything!
LMDB is much saner in the sense that it supports real ACID transactions, whereas RocksDB offers savepoints. The latter is heavy and consumes a lot more memory for a lot less read throughput. However, RocksDB has a much better parallel and concurrent write story: you can merge entries with merge functions and therefore write from multiple CPUs.
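A minimal sketch of that merge-function story: an associative merge operator implementing a counter, so concurrent writers issue Merge() instead of read-modify-write. The operator and keys here are hypothetical.

```
#include <rocksdb/db.h>
#include <rocksdb/merge_operator.h>
#include <cstdio>
#include <memory>
#include <string>

// Combines increments; RocksDB applies it lazily during reads/compaction.
class CounterMergeOperator : public rocksdb::AssociativeMergeOperator {
 public:
  bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing,
             const rocksdb::Slice& value, std::string* new_value,
             rocksdb::Logger* /*logger*/) const override {
    long base = existing ? std::stol(existing->ToString()) : 0;
    *new_value = std::to_string(base + std::stol(value.ToString()));
    return true;
  }
  const char* Name() const override { return "CounterMergeOperator"; }
};

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    options.merge_operator = std::make_shared<CounterMergeOperator>();
    rocksdb::DB* db = nullptr;
    rocksdb::DB::Open(options, "/tmp/merge_db", &db);

    // Each Merge() is a logical "+= n"; no read required on the write path.
    db->Merge(rocksdb::WriteOptions(), "hits", "1");
    db->Merge(rocksdb::WriteOptions(), "hits", "41");
    std::string v;
    db->Get(rocksdb::ReadOptions(), "hits", &v);
    std::printf("hits=%s\n", v.c_str());  // 42
    delete db;
}
```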
-
Google's OSS-Fuzz expands fuzz-reward program to $30000
https://github.com/facebook/rocksdb/issues?q=is%3Aissue+clic...
Here are some bugs in JeMalloc:
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
LevelDB - LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
Snappy - A fast compressor/decompressor
LMDB - Read-only mirror of official repo on openldap.org. Issues and pull requests here are ignored. Use OpenLDAP ITS for issues.
LZMA - (Unofficial) Git mirror of LZMA SDK releases
SQLite - Unofficial git mirror of SQLite sources (see link for build instructions)
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
sled - the champagne of beta embedded databases
ZLib - A massively spiffy yet delicately unobtrusive compression library.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
brotli - Brotli compression format
TileDB - The Universal Storage Engine