Snappy VS xxHash

Compare Snappy and xxHash and see how they differ.

Snappy

A fast compressor/decompressor (by Google)

xxHash

Extremely fast non-cryptographic hash algorithm (by Cyan4973)
                 Snappy                 xxHash
Mentions         2                      17
Stars            5,203                  6,209
Stars growth     1.2%                   -
Activity         4.4                    8.8
Latest commit    9 days ago             10 days ago
Language         C++                    C
License          GPL-3.0-or-later       GPL-3.0-or-later
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

Snappy

Posts with mentions or reviews of Snappy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-07-29.

xxHash

Posts with mentions or reviews of xxHash. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-05-10.
  • A Simple Hash for Perlin Noise
    3 projects | news.ycombinator.com | 10 May 2022
    XxHash has some great benchmarks for various hash functions. FNV is still competitive for small inputs. Most hash functions are built to have high throughput for hashing hundreds of bytes or more. XxHash in particular has an explicit mode switch from "small data" to "big data" sizes around a couple hundred bytes (it varies by platform and compiler).

    It's hard to do both small and big data correctly, and FNV is one of the few that optimizes for small data.

    For moderately small data, larger than around 4 bytes, xxh3 beats it handily. But for extremely small sizes, FNV is still the winner.

    Honestly people should probably just try to switch to xxh3 if performance is a concern, but FNV is certainly competitive for integer-size keys.

    https://github.com/Cyan4973/xxHash/wiki/Performance-comparis...

  • Use Fast Data Algorithms
    5 projects | news.ycombinator.com | 6 May 2022
Agree with everything you say, except that the post didn't mention non-cryptographic hashing algos that can be driven that hard. xxHash[1] (and especially XXH3) is almost always the fastest hashing choice, as it is both fast and widely supported across languages.

Sure, there are some other fast ones out there like cityhash[2], but there aren't good Java/Python bindings I'm aware of, and I wouldn't recommend using it in production given its lack of widespread use versus xxHash, which is used internally by LZ4 and in databases all over the place.

    [1] https://github.com/Cyan4973/xxHash

  • How we built our tiered storage subsystem
    3 projects | dev.to | 20 Apr 2022
Object stores shard workloads based on an object name prefix, so if all log segments have the same prefix, they will hit the same storage server. This leads to throttling and limits upload throughput. To ensure good upload performance, Redpanda inserts a randomized prefix into every object name; the prefix is computed with the xxHash hash function.
  • BLAKE2: “Harder, Better, Faster, Stronger” Than MD5 (2014)
    1 project | news.ycombinator.com | 23 Jan 2022
Check out https://github.com/Cyan4973/xxHash; I use it for performance-sensitive non-crypto stuff.
  • No clue how other people are hitting <200ms on Day 23 (C++)
    5 projects | reddit.com/r/adventofcode | 24 Dec 2021
Also, choose a good hash function! A poor one can reduce hashtable performance dramatically, especially with chaining hashtables, since hashing all elements into a couple of buckets means massive linked lists to traverse. I recommend not rolling your own; use a good-quality one like xxHash. xxHash also lets you hash an entire block of bytes in one call.
  • XXH: Bring your favorite shell wherever you go through the SSH
    4 projects | news.ycombinator.com | 14 Dec 2021
  • Meow Hash
    13 projects | news.ycombinator.com | 29 Oct 2021
    The README for xxhash has benchmarks covering fast hashes including Meow:

    https://github.com/Cyan4973/xxHash/wiki/Performance-comparis...

  • Getting Started with Redis and RedisGraph
    13 projects | dev.to | 21 Oct 2021
    $ git clone https://github.com/RedisGraph/RedisGraph -b v2.4.11 --recurse-submodules -j8
    Cloning into 'RedisGraph'...
    Submodule 'deps/RediSearch' (https://github.com/RediSearch/RediSearch.git) registered for path 'deps/RediSearch'
    Submodule 'deps/googletest' (https://github.com/google/googletest.git) registered for path 'deps/googletest'
    Submodule 'deps/libcypher-parser' (https://github.com/RedisGraph/libcypher-parser.git) registered for path 'deps/libcypher-parser'
    Submodule 'deps/rax' (https://github.com/antirez/rax.git) registered for path 'deps/rax'
    Submodule 'deps/readies' (https://github.com/RedisLabsModules/readies.git) registered for path 'deps/readies'
    Submodule 'deps/xxHash' (https://github.com/Cyan4973/xxHash.git) registered for path 'deps/xxHash'
    [clone progress output trimmed]
    Submodule path 'deps/RediSearch': checked out '68430b3c838374478dd9ffe4e361534f572b16ff'
    Submodule path 'deps/RediSearch/deps/googletest': checked out 'dea0216d0c6bc5e63cf5f6c8651cd268668032ec'
    Submodule path 'deps/RediSearch/deps/readies': checked out '89be267427c7dfcfaab4064942ef0f595f6b1fa3'
    Submodule path 'deps/googletest': checked out '565f1b848215b77c3732bca345fe76a0431d8b34'
    Submodule path 'deps/libcypher-parser': checked out '38cdee1867b18644616292c77fe2ac1f2b179537'
    Submodule path 'deps/rax': checked out 'ba4529f6c836c9ff1296cde12b8557329f5530b7'
    Submodule path 'deps/readies': checked out 'd59f3ad4e9b3d763eb41df07567111dc94c6ecac'
    Submodule path 'deps/xxHash': checked out '726c14000ca73886f6258a6998fb34dd567030e9'
    $
  • Better way to get a random 64 bit integer?
    1 project | reddit.com/r/C_Programming | 6 Aug 2021
    Hmm I got it from Yann Collet's github where he uses it as a kernel for Facebook's xxHash.

What are some alternatives?

When comparing Snappy and xxHash you can also consider the following projects:

zstd - Zstandard - Fast real-time compression algorithm

brotli - Brotli compression format

LZ4 - Extremely Fast Compression algorithm

BLAKE3 - the official Rust and C implementations of the BLAKE3 cryptographic hash function

ZLib - A massively spiffy yet delicately unobtrusive compression library.

LZMA - (Unofficial) Git mirror of LZMA SDK releases

meow_hash - Official version of the Meow hash, an extremely fast level 1 hash

swift-crypto - Open-source implementation of a substantial portion of the API of Apple CryptoKit suitable for use on Linux platforms.

zlib-ng - zlib replacement with optimizations for "next generation" systems.

Minizip-ng - Fork of the popular zip manipulation library found in the zlib distribution.