Snappy vs zstd

|  | Snappy | zstd |
| --- | --- | --- |
| Mentions | 5 | 116 |
| Stars | 6,249 | 24,258 |
| Growth | 0.5% | 1.0% |
| Activity | 6.2 | 9.8 |
| Latest commit | 6 months ago | 3 days ago |
| Language | C++ | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Snappy
-
Why I enjoy using the Nim programming language at Reddit.
Another example of Nim being really fast is the supersnappy library. This library benchmarks faster than Google’s C or C++ Snappy implementation.
-
Stretch iPhone to Its Limit: 2GiB Stable Diffusion Model Runs Locally on Device
It doesn't destroy performance, for the simple reason that nowadays memory access has higher latency than pure compute. If you need to use compute to produce some data to be stored in memory, your overall throughput could very well be higher than without compression.
There has been a great deal of innovation in fast compression in recent years. Traditional compression tools like gzip or xz are geared towards higher compression ratios, but memory compression tends to favor speed. Check out these algorithms (a minimal sketch of zstd's fast mode follows the list):
* lz4: https://lz4.github.io/lz4/
* Google's snappy: https://github.com/google/snappy
* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
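To illustrate the speed-oriented end of that list, here is a minimal sketch using zstd's one-shot API with a negative ("fast mode") compression level; the level -5 is an arbitrary choice for illustration, not a recommendation.

```c
// Minimal sketch: zstd one-shot compression with a negative "fast mode"
// level. Build with: cc demo.c -lzstd
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

int main(void) {
    const char *src = "example payload that would normally live in memory";
    size_t srcSize = strlen(src) + 1;

    size_t bound = ZSTD_compressBound(srcSize);  // worst-case output size
    void *dst = malloc(bound);

    // Negative levels select zstd's speed-oriented "fast" modes;
    // lower levels trade compression ratio for throughput.
    size_t cSize = ZSTD_compress(dst, bound, src, srcSize, -5);
    if (ZSTD_isError(cSize)) {
        fprintf(stderr, "compress: %s\n", ZSTD_getErrorName(cSize));
        return 1;
    }
    printf("compressed %zu -> %zu bytes\n", srcSize, cSize);
    free(dst);
    return 0;
}
```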
-
Compression with best ratio and fast decompression
Google released Snappy, which is extremely fast and robust (both at compression and decompression), but it's definitely not nearly as good (in terms of compression ratio). Google mostly uses it for real-time compression, for example of network messages - not for long-term storage.
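For a sense of how small the Snappy API surface is, here is a minimal sketch using the C bindings (snappy-c.h) that ship in the google/snappy repo; error handling is abbreviated.

```c
// Minimal sketch using the C bindings (snappy-c.h) shipped with
// google/snappy. Build with: cc demo.c -lsnappy
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <snappy-c.h>

int main(void) {
    const char *src = "a network message, repeated repeated repeated";
    size_t srcLen = strlen(src);

    // Worst-case compressed size, then compress in one shot.
    size_t cLen = snappy_max_compressed_length(srcLen);
    char *compressed = malloc(cLen);
    if (snappy_compress(src, srcLen, compressed, &cLen) != SNAPPY_OK)
        return 1;

    // Recover the original length, then decompress.
    size_t uLen;
    if (snappy_uncompressed_length(compressed, cLen, &uLen) != SNAPPY_OK)
        return 1;
    char *restored = malloc(uLen);
    if (snappy_uncompress(compressed, cLen, restored, &uLen) != SNAPPY_OK)
        return 1;

    printf("%zu -> %zu -> %zu bytes\n", srcLen, cLen, uLen);
    free(compressed);
    free(restored);
    return 0;
}
```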
-
How to store item info?
Just compress it! Of course, if you use ZIP, players will be able to just open the zip file and change whatever they want. But you can use a less popular compression algorithm that the default Windows File Explorer doesn't support. Snappy, for example.
-
What's the best way to compress strings?
zstd
-
DeepSeek releases Janus Pro, a text-to-image generator [pdf]
This. Even their less-known work is pretty solid [1] (used it the other day and was frankly kinda amazed at how well it performed under the circumstances). Facebook/Meta sucks like most social media does, but, not unlike Elon Musk, they are on record as having made some contributions to society as a whole.
[1]https://github.com/facebook/zstd
-
New standards for a faster and more private Internet
I don't think so? It's only seekable with an additional index [1], just like any other compression scheme.
[1] https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...
-
Large Text Compression Benchmark
- latest zstd v1.5.6 (Mar 30, 2024, https://github.com/facebook/zstd/releases)
-
Current problems and mistakes of web scraping in Python and tricks to solve them!
You may also have noticed that zstd appeared some time ago as a newly supported data compression format. I haven't seen any backends that use it yet, but httpx will support decompressing it in versions above 0.28.0. I already use it to compress server response dumps in my projects; it shows incredible efficiency in asynchronous solutions with aiofiles.
-
MLow: Meta's low bitrate audio codec
Zstd is a personal project? Surely it's not by accident in the Facebook GitHub organization? And that you need to sign a contract on code.facebook.com before they'll consider merging any contributions? That seems like an odd claim, unless it used to be a personal project and Facebook took it over
(https://github.com/facebook/zstd/blob/dev/CONTRIBUTING.md#co...)
-
My First Arch Linux Installation
Unmount root and remount the subvolumes and the boot partition. noatime is used for better performance and zstd for file compression:
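The commands themselves did not survive extraction; what follows is a minimal sketch of what this step typically looks like on an Arch + Btrfs setup. The device names (/dev/sda1, /dev/sda2) and subvolume names (@, @home) are assumptions, not the article's actual values.

```
# Remount Btrfs subvolumes with noatime and zstd compression
# (device and subvolume names are assumed, not from the article).
umount /mnt
mount -o noatime,compress=zstd,subvol=@ /dev/sda2 /mnt
mkdir -p /mnt/home /mnt/boot
mount -o noatime,compress=zstd,subvol=@home /dev/sda2 /mnt/home
mount /dev/sda1 /mnt/boot
```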
-
Rethinking string encoding: a 37.5% space efficient encoding than UTF-8 in Fury
> In such cases, the serialized binary are mostly in 200~1000 bytes. Not big enough for zstd to work
You're not referring to the same dictionary that I am. Look at --train in [1].
If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).
If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time, or you want maximum compression performance), you can gather this corpus transparently as messages are generated, then train a dictionary and attach it as sideband metadata to a message. You'll probably need to defer decoding if a message references a dictionary that hasn't arrived yet (i.e. the transport delivers messages out of order relative to generation). There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd plus a representative training corpus. If it does, you'd need to actually show this rather than argue from first principles.
[1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
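To make the dictionary workflow concrete, here is a minimal C sketch of the same idea using the library API (ZDICT_trainFromBuffer from zdict.h plus ZSTD_compress_usingDict). The tiny sample corpus is a placeholder; real training needs many representative samples, and ZDICT will report an error otherwise.

```c
// Minimal sketch of zstd dictionary training and use.
// Build with: cc demo.c -lzstd
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>
#include <zdict.h>

int main(void) {
    // Placeholder corpus: real code would gather hundreds of samples.
    const char *samples[] = {
        "{\"user\":1,\"op\":\"get\"}",
        "{\"user\":2,\"op\":\"put\"}",
        "{\"user\":3,\"op\":\"del\"}",
    };
    // ZDICT expects the samples concatenated plus a table of sizes.
    size_t sizes[3], total = 0;
    for (int i = 0; i < 3; i++) { sizes[i] = strlen(samples[i]); total += sizes[i]; }
    char *corpus = malloc(total), *p = corpus;
    for (int i = 0; i < 3; i++) { memcpy(p, samples[i], sizes[i]); p += sizes[i]; }

    char dict[16 * 1024];
    size_t dictSize = ZDICT_trainFromBuffer(dict, sizeof dict, corpus, sizes, 3);
    if (ZDICT_isError(dictSize)) {
        fprintf(stderr, "train: %s\n", ZDICT_getErrorName(dictSize));
        return 1;
    }

    // Compress one small message using the pre-shared dictionary.
    const char *msg = "{\"user\":9,\"op\":\"get\"}";
    size_t bound = ZSTD_compressBound(strlen(msg));
    void *dst = malloc(bound);
    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    size_t cSize = ZSTD_compress_usingDict(cctx, dst, bound,
                                           msg, strlen(msg),
                                           dict, dictSize, 3);
    if (!ZSTD_isError(cSize))
        printf("%zu -> %zu bytes with dictionary\n", strlen(msg), cSize);

    ZSTD_freeCCtx(cctx);
    free(dst);
    free(corpus);
    return 0;
}
```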
-
Drink Me: (Ab)Using a LLM to Compress Text
> Doesn't take large amount of GPU resources
This is an understatement; zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...
My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
-
SQLite VFS for ZSTD seekable format
This VFS reads an SQLite database file after it has been compressed using the [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
-
Chrome Feature: ZSTD Content-Encoding
Of course, you may get different results with another dataset.
gzip (zlib -6) [ratio=32%] [compr=35Mo/s] [dec=407Mo/s]
zstd (zstd -2) [ratio=32%] [compr=356Mo/s] [dec=1067Mo/s]
NB1: The default level for zstd is 3, but the table only had -2; the difference is probably small. Levels range from 1 to 22 for zstd and 1 to 9 for gzip.
NB2: The default gzip program (at least on Debian) is the executable from zlib. In my workflows, libdeflate-gzip is compatible and noticeably faster.
NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases
At high compression settings, according to this benchmark, xz can do slightly better, if you're willing to pay a roughly 10× penalty on decompression speed.
xz -9 [ratio=23%] [compr=2.6Mo/s] [dec=88Mo/s]
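If you want to reproduce numbers like these on your own data, zstd's CLI has a built-in benchmark mode; a minimal sketch follows (the file name is a placeholder):

```
# Benchmark compression levels 2 through 9 in memory on a sample file;
# -b sets the first level, -e the last. "corpus.bin" is a placeholder.
zstd -b2 -e9 corpus.bin
```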
What are some alternatives?
ZLib - A massively spiffy yet delicately unobtrusive compression library.
haproxy - HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
LZ4 - Extremely Fast Compression algorithm
brotli - Brotli compression format
LZMA - (Unofficial) Git mirror of LZMA SDK releases
zlib-ng - zlib replacement with optimizations for "next generation" systems.
tiny_jpeg.h - Single header lib for JPEG encoding. Public domain. C99. stb style.
LZFSE - LZFSE compression library and command line tool