LZ4
RocksDB
|  | LZ4 | RocksDB |
|---|---|---|
| Mentions | 21 | 37 |
| Stars | 8,294 | 25,271 |
| Growth | 2.6% | 1.6% |
| Activity | 8.7 | 9.8 |
| Latest commit | 8 days ago | 6 days ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LZ4
-
Rsyncing 20TB locally
According to the benchmark figures at https://github.com/lz4/lz4, you need around ten fairly modern cores running in parallel to reach roughly 8 GB/s.
- Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
-
zstd
> The downside of lz4 is that it can't be configured to run at higher & slower compression ratios.
lz4 does have some level of configurability: https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194
There's also LZ4_HC.
-
I'm new to this
Get your bootloader unlocked via Download mode, then obtain your stock firmware, preferably for your current region, from https://samfw.com (Download mode: CARRIER_CODE). Then:
1. Extract the boot image from the AP file with 7-Zip.
2. Unpack it from LZ4 with https://github.com/lz4/lz4/releases (drag and drop).
3. Patch it with Magisk: https://github.com/topjohnwu/magisk/releases/latest
4. Rename the patched image to "boot.img", pack it into a .tar with 7-Zip, and flash it to AP with Odin: https://odindownload.com
-
An efficient image format for SDL
After some investigations and experiments, I found out that it was the PNG compression (well, decompression I should say) that took a while. So I've made some experiments using the LZ4 compression library, which is focused on decompression speed, and it turned out to be an excellent solution!
-
Bzip3 – a better and stronger spiritual successor to bzip2
If anyone just cares for speed instead of compression I’d recommend lz4 [1]. I only recently started using it. Its speed is almost comparable to memcpy.
-
I just took a random screenshot and made it look prettier. [ I don't know if this counts as fanart ]
E: Realtime compression (a good compression library like Zstandard can make a game less than half the size while using a tiny amount of CPU power when loading assets; I think that's a pretty worthwhile trade). (ZSTD github) (LZ4 github)
-
What's the best way to compress strings?
lz4 for maximum decompression speed, for data that is often read but rarely written
-
How to become a tools/graphics/engine programmer
Getting lost in material models is tempting. But, at this point you are overdue for working on your own asset pipeline. glTF is great. But, you should learn how to do it yourself. The hardest part will be reading source asset files. The FBX SDK is painful. Assimp isn't great either. Writing your own exporter to your own intermediate text format from Maya or Blender would be good if you are up for it. From whatever source, make your own archive format and binary formats for meshes, animations, textures and scenes. Use https://github.com/lz4/lz4 for compression. You should be able to decompress a list of assets into a big linear array and use them right there with just a bit of pointer fix-up. Minimize the amount of memory you have to touch from start to finish. Data that is going to the GPU (textures, vertex/index buffers) should decompress straight into mapped buffers for fast uploads.
-
LZ4, an Extremely Fast Compression Algorithm
I'm not a fan of the stacked bar charts; I prefer the "Benchmarks" data table on the GitHub source page: https://github.com/lz4/lz4
It makes it very clear where LZ4 sits in comparisons of compression speed, decompression speed, and compression ratio.
RocksDB
-
How RocksDB Works
Tuning RocksDB well is a very hard challenge, and one that I am happy to no longer do day to day. RocksDB is very powerful, but it comes with some very sharp edges. Compaction is one of those, and all answers are likely workload dependent.
If you are worried about write amplification, then leveled compaction is sub-optimal. I would try universal compaction.
- https://github.com/facebook/rocksdb/wiki/Universal-Compactio...
-
What are the advantages of using Rust to develop KV databases?
It's fairly challenging to write a KV database, and takes several years of development to get the balance right between performance and reliability and avoiding data loss. Maybe read through the documentation for RocksDB https://github.com/facebook/rocksdb/wiki/RocksDB-Overview and watch the video on why it was developed and that may give you an impression of what is involved.
-
We’re the Meilisearch team! To celebrate v1.0 of our open-source search engine, Ask us Anything!
LMDB is much saner in the sense that it supports real ACID transactions instead of RocksDB's savepoints. The latter is heavy and consumes a lot more memory for a lot less read throughput. However, RocksDB has a much better parallel and concurrent write story: you can merge entries with merge functions and therefore write from multiple CPUs.
-
Google's OSS-Fuzz expands fuzz-reward program to $30000
https://github.com/facebook/rocksdb/issues?q=is%3Aissue+clic...
Here are some bugs in JeMalloc:
-
Event streaming in .Net with Kafka
Streamiz wraps a consumer and a producer and executes the topology for each record consumed from the source topic. You can easily create stateless and stateful applications. By default, each state store is a RocksDB state store persisted on disk.
- Is there a lightweight, stable and embedded database library?
- Lines of code to rewrite the 600'000 lines RocksDB into a coroutine program
-
Meilisearch just announced its $15M Series A; the Rust search engine strikes again
LMDB is much saner in the sense that it supports real ACID transactions instead of RocksDB's savepoints. The latter is heavy and consumes a lot more memory for a lot less read throughput. However, RocksDB has a much better parallel and concurrent write story: you can merge entries with merge functions and therefore write from multiple CPUs.
-
Hey Rustaceans! Got a question? Ask here! (37/2022)!
My problem is that both Ceph and the Rust crate in question use the RocksDB store (in Rust I use this one), and when I try to compile the project I get multiple-definition errors, since the C++ rocksdb and the Rust rocksdb both expose the same functions.
-
Complete guide to open source licenses for developers
The keyword License or COPYING must be placed at the beginning of the file name, for example License.BSD or License_MIT. RocksDB is an excellent example of how to organize multiple licenses.
What are some alternatives?
zstd - Zstandard - Fast real-time compression algorithm
Snappy - A fast compressor/decompressor
brotli - Brotli compression format
LZMA - (Unofficial) Git mirror of LZMA SDK releases
LMDB - Read-only mirror of official repo on openldap.org. Issues and pull requests here are ignored. Use OpenLDAP ITS for issues.
LevelDB - LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
ZLib - A massively spiffy yet delicately unobtrusive compression library.
SQLite - Unofficial git mirror of SQLite sources (see link for build instructions)
sled - the champagne of beta embedded databases
ClickHouse - ClickHouse® is a free analytics DBMS for big data
TileDB - The Universal Storage Engine
LZFSE - LZFSE compression library and command line tool