Snappy vs xxHash
| | Snappy | xxHash |
|---|---|---|
| Mentions | 5 | 28 |
| Stars | 5,977 | 8,431 |
| Growth | 0.6% | - |
| Activity | 2.6 | 8.4 |
| Latest commit | 6 days ago | 4 days ago |
| Language | C++ | C |
| License | BSD 3-Clause | BSD 2-Clause (library) |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Snappy
-
Why I enjoy using the Nim programming language at Reddit.
Another example of Nim being really fast is the supersnappy library. This library benchmarks faster than Google’s C or C++ Snappy implementation.
-
Stretch iPhone to Its Limit: 2GiB Stable Diffusion Model Runs Locally on Device
It doesn't destroy performance, for the simple reason that memory access nowadays has higher latency than pure compute. If you need to use compute to produce data that gets stored in memory, your overall throughput could very well be higher than without compression.
There has been a great deal of innovation in fast compression in recent years. Traditional compression tools like gzip or xz are geared towards higher compression ratios, but memory compression tends to favor speed. Check out these algorithms:
* lz4: https://lz4.github.io/lz4/
* Google's snappy: https://github.com/google/snappy
* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
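The speed-versus-ratio tradeoff described above can be sketched with the standard library's zlib as a stand-in, since lz4, snappy, and zstd all require third-party Python bindings (`lz4`, `python-snappy`, `zstandard`). The sample data is a made-up repetitive log line; only the tradeoff itself is the point:

```python
import time
import zlib

# Hypothetical repetitive payload; real memory pages compress less uniformly.
data = b"sensor_reading,2023-01-01T00:00:00,23.5\n" * 50_000

for level in (1, 6, 9):  # 1 = fastest, 9 = best ratio
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: ratio {len(data) / len(compressed):.1f}x "
          f"in {elapsed * 1000:.1f} ms")
```

lz4 and snappy sit even further toward the speed end of this curve than zlib level 1, which is why they are the usual choices for in-memory compression.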
-
Compression with best ratio and fast decompression
Google released Snappy, which is extremely fast and robust (both at compression and decompression), but it's definitely not nearly as good (in terms of compression ratio). Google mostly uses it for real-time compression, for example of network messages - not for long-term storage.
-
How to store item info?
Just compress it! Of course, if you use ZIP, players will be able to just open the file and change whatever they want. But you can use a less popular compression algorithm that the default Windows File Explorer doesn't support. Snappy, for example.
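The save-and-compress idea above can be sketched in a few lines. The item list is a made-up example, and stdlib zlib stands in for Snappy (which needs the third-party `python-snappy` binding):

```python
import json
import zlib

# Hypothetical item data for illustration.
items = [{"id": 1, "name": "Iron Sword", "damage": 12},
         {"id": 2, "name": "Health Potion", "heal": 50}]

# Save: serialize to JSON, then compress. The resulting bytes are no longer
# editable in a plain text editor or the built-in ZIP viewer.
blob = zlib.compress(json.dumps(items).encode("utf-8"))

# Load: decompress, then parse.
restored = json.loads(zlib.decompress(blob))
assert restored == items
```

Note that this only deters casual editing; a determined player can still recognize the format and decompress it.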
- What's the best way to compress strings?
xxHash
-
The One Billion Row Challenge in CUDA: from 17 minutes to 17 seconds
> GPU Hash Table?
How badly would performance have suffered if you had sha256'd the lines to build the map? I'm going to guess "badly"?
Maybe something like this in CUDA: https://github.com/Cyan4973/xxHash ?
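The cost gap the comment is guessing at is easy to measure on the CPU side. This sketch uses stdlib `zlib.crc32` as a stand-in for xxHash (the standard library has no xxHash; the third-party `xxhash` package provides real XXH64/XXH3), and hypothetical station-name keys in the spirit of the billion-row challenge:

```python
import hashlib
import time
import zlib

# Made-up short keys, roughly like the station names in the challenge.
keys = [f"station_{i}".encode() for i in range(100_000)]

start = time.perf_counter()
for k in keys:
    hashlib.sha256(k).digest()
sha_time = time.perf_counter() - start

start = time.perf_counter()
for k in keys:
    zlib.crc32(k)
crc_time = time.perf_counter() - start

print(f"sha256: {sha_time * 1000:.0f} ms, crc32: {crc_time * 1000:.0f} ms")
```

For building a plain lookup map, a non-cryptographic hash gives the same bucketing behavior at a small fraction of the per-key cost; sha256's collision resistance buys nothing here.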
- ETag and HTTP Caching
-
Day 64: Implementing a basic Bloom Filter Using Java BitSet api
Examples of fast, simple hashes that are independent enough include MurmurHash, xxHash, the Fowler–Noll–Vo (FNV) hash function, and many others
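The thread above is about Java's BitSet, but the Bloom filter idea sketches the same way in Python, with a plain integer as the bit set. Rather than k independent hash functions, this sketch derives the k probe positions from two halves of one digest (the double-hashing trick); all names and sizes here are illustrative choices:

```python
import hashlib

class BloomFilter:
    """Minimal sketch: m bits stored in a Python int, k probe positions
    derived from two 64-bit hashes (double hashing)."""

    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, item: str):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # False positives possible; false negatives are not.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # True
print(bf.might_contain("bob"))    # almost certainly False
```

In a real implementation you would swap sha256 for one of the fast hashes named above; the filter logic is unchanged.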
- Closed-addressing hashtables implementation
-
NIST Retires SHA-1 Cryptographic Algorithm
If you're only using the hash for non-cryptographic applications, there are much faster hashes: https://github.com/Cyan4973/xxHash
-
Does the checksum algorithm crc32c-intel support AMD Ryzen series 3000 or newer?
I found benchmark results for the AMD Ryzen 5950X
-
[Study Project] A memory-optimized JSON data structure
But what's the catch, you're thinking? Well, it is a bit slower than its counterparts at deserializing (and marginally faster at serializing). To achieve a smaller footprint, it uses a few tricks, notably a custom hash table to deduplicate strings. This comes at a cost, of course (even with xxHash to speed things up), but keeps the slowdown reasonable (I think).
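The string-deduplication idea mentioned above (often called interning) can be sketched with an ordinary dict standing in for the project's custom hash table; the class and names here are illustrative, not the project's actual API:

```python
class StringInterner:
    """Sketch of string deduplication: equal strings share one stored copy."""

    def __init__(self):
        self._table = {}

    def intern(self, s: str) -> str:
        # One hash lookup per string; repeated strings return the object
        # already in the table instead of keeping a duplicate copy.
        return self._table.setdefault(s, s)

interner = StringInterner()
a = interner.intern("user_id")
b = interner.intern("".join(["user", "_id"]))  # equal value, distinct object
print(a is b)  # True: both names point at the single stored copy
```

For JSON documents with many repeated keys, paying one extra hash per string at parse time buys a large reduction in memory, which matches the serialize-fast/deserialize-slower tradeoff described in the post.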
-
What do you typically use for non-cryptographic hash functions?
Non-cryptographic hashes have collisions. For example, assume you have content like "abcdefg" whose hash value is "123"; with a weak hash algorithm, some other content like "abcdefZ" can also hash to "123", which means the hash function fails to be a unique fingerprint of that content. BLAKE3, for example, can do 6-7 GB/s, which makes it pretty fast and secure. If your requirements accept collisions at a defined error rate, I would advise you to take a look at XXH3 if you need a very fast hash algorithm; it can run at the pace of RAM access (30 GB/s+). But again, run tests on the particular hardware you are targeting; maybe the AES-hardware-accelerated MeowHash will serve you better.
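The collision point above follows from the pigeonhole principle: any hash with a small output range must map some distinct inputs to the same value. This sketch makes that concrete by truncating sha256 to 16 bits (a stand-in for any short or weak hash) and searching made-up inputs for a collision:

```python
import hashlib

def hash16(data: bytes) -> int:
    # 16-bit hash: only 65,536 possible values, so collisions are guaranteed
    # once we feed in more inputs than that (and expected much sooner, by
    # the birthday bound: roughly sqrt(65536) = 256 inputs).
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

seen = {}
collision = None
for i in range(70_000):
    msg = f"content-{i}".encode()
    h = hash16(msg)
    if h in seen:
        collision = (seen[h], msg, h)
        break
    seen[h] = msg

first, second, h = collision
print(f"collision: {first!r} and {second!r} both hash to {h}")
```

A full-width 64-bit hash like XXH3 makes accidental collisions rare rather than impossible, which is exactly the "defined error rate" the comment describes.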
- C++ gonna die😥
- rsync, article 3: How does rsync work?
What are some alternatives?
zstd - Zstandard - Fast real-time compression algorithm
BLAKE3 - the official Rust and C implementations of the BLAKE3 cryptographic hash function
LZ4 - Extremely Fast Compression algorithm
meow_hash - Official version of the Meow hash, an extremely fast level 1 hash
brotli - Brotli compression format
xxh - 🚀 Bring your favorite shell wherever you go through the ssh. Xonsh shell, fish, zsh, osquery and so on.
ZLib - A massively spiffy yet delicately unobtrusive compression library.
blake3 - An AVX-512 accelerated implementation of the BLAKE3 cryptographic hash function
LZMA - (Unofficial) Git mirror of LZMA SDK releases
smhasher - Hash function quality and speed tests
zlib-ng - zlib replacement with optimizations for "next generation" systems.
swift-crypto - Open-source implementation of a substantial portion of the API of Apple CryptoKit suitable for use on Linux platforms.