SaaSHub helps you find the best software and product alternatives
Top 23 Zlib Open-Source Projects
-
lizard
Lizard (formerly LZ5) is an efficient compressor with very fast decompression. It achieves a compression ratio comparable to zip/zlib and zstd/brotli (at low and medium compression levels) at decompression speeds of 1000 MB/s and faster. (by inikep)
-
uzlib
Radically unbloated DEFLATE/zlib/gzip compression/decompression library. It can decompress any gzip/zlib data and offers a simplified compressor that produces gzip-compatible output while requiring far fewer resources (at a lower compression ratio, of course).
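The three formats named in the description are just different framings of the same DEFLATE stream, which is why one small decompressor can cover all of them. A minimal sketch of the distinction using Python's standard zlib module (not uzlib itself), where the wbits parameter selects the container:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog" * 20

# wbits selects the wrapper around the same DEFLATE stream:
#   15  -> zlib wrapper (2-byte header + Adler-32 trailer)
#   -15 -> raw DEFLATE, no header or checksum
#   31  -> gzip wrapper (10-byte header + CRC-32 trailer)
zlib_stream = zlib.compress(data)  # default wbits=15, zlib wrapper

co = zlib.compressobj(wbits=-15)
raw_stream = co.compress(data) + co.flush()

co = zlib.compressobj(wbits=31)
gzip_stream = co.compress(data) + co.flush()

assert zlib.decompress(raw_stream, wbits=-15) == data
assert zlib.decompress(gzip_stream, wbits=31) == data
# wbits=47 (32 + 15) auto-detects zlib or gzip framing
assert zlib.decompress(zlib_stream, wbits=47) == data
```

The payload bytes are identical in all three cases; only the header and checksum differ, which is what lets a library like uzlib stay small while handling all three formats.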
-
ESP32-targz
🗜️ An Arduino library to unpack/uncompress tar, gz, and tar.gz files on ESP32 and ESP8266
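On a desktop, the same tar-inside-gzip layering can be sketched with Python's standard tarfile module (the Arduino library's API is different; this only illustrates the two-layer format being unpacked):

```python
import io
import tarfile

# Build a small .tar.gz entirely in memory ...
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    payload = b"hello from the archive"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# ... then unpack it: gzip decompression on the outside,
# tar member extraction on the inside.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    extracted = tar.extractfile("hello.txt").read()

assert extracted == payload
```

On a microcontroller the interesting part is doing this with a few kilobytes of RAM and streaming writes to flash, which is the problem ESP32-targz addresses.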
Project mention: Show HN: Pzip- blazing fast concurrent zip archiver and extractor | news.ycombinator.com | 2023-09-24
Please note that allowing for a 2% bigger resulting file could mean a huge speedup in these circumstances, even with the same compression routines - see these benchmarks of zlib and zlib-ng at different compression levels:
https://github.com/zlib-ng/zlib-ng/discussions/871
IMO, a fair comparison of the real speed improvement brought by a new program is only possible between almost identical resulting compressed sizes.
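The size-versus-speed tradeoff behind that comment can be seen with nothing more than Python's standard zlib module. The exact byte counts depend on the input, but the ordering holds for typical data:

```python
import zlib

# Moderately repetitive sample data; real-world text behaves similarly.
data = b"some moderately repetitive payload with varied words " * 400

sizes = {level: len(zlib.compress(data, level)) for level in (1, 6, 9)}

# Higher levels spend more CPU for smaller (sometimes only slightly
# smaller) output, so accepting a few percent of extra size at a fast
# level can buy a large throughput win.
assert sizes[9] <= sizes[6] <= sizes[1]
```

This is why benchmark comparisons at mismatched compression levels can be misleading: a "faster" compressor may simply be trading away a little ratio.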
IIRC they can be extracted using innoextract: https://github.com/dscharrer/innoextract
For a benchmark on a standard set: https://github.com/inikep/lzbench/blob/master/lzbench18_sort...
You should look at Precomp - an apparently abandoned but impressively effective archiver dedicated to compressing those ZIP-based formats; it compresses them to half or a third of the size that LZMA2 achieves.
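The idea behind Precomp can be sketched with the standard library: instead of recompressing an already-deflated stream (whose bytes look nearly random), undo the deflate layer first and hand the original bytes to a stronger codec. A rough illustration with Python's zlib and lzma modules, not Precomp's actual pipeline:

```python
import lzma
import zlib

# Stand-in for content stored deflate-compressed inside a ZIP-based file.
original = "".join(f"record {i}: value={i * i}\n" for i in range(5000)).encode()
deflated = zlib.compress(original, 9)

# Recompressing the deflate stream directly gains almost nothing:
# its bytes are already close to random.
naive = lzma.compress(deflated)

# Precomp-style: decompress first, then apply the stronger codec
# to the original, structured bytes.
unpacked_first = lzma.compress(zlib.decompress(deflated))

assert len(unpacked_first) < len(naive)
```

Precomp additionally records enough metadata to re-deflate the stream bit-identically on extraction, which is the hard part this sketch omits.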
For the last few hours I have been unsuccessfully trying to use libzippp -> https://github.com/ctabin/libzippp/releases/tag/libzippp-v6.1-1.9.2
It works fine in the latest stable release of Darktable. You just need the definition for the camera in cameras.xml. Support for the A6700 was already added in the development branch, but there hasn't been a new stable release since. Fortunately, simply downloading and replacing the file (which you can get from the dev branch on GitHub) suffices.
Project mention: Zstd Content-Encoding planned to ship with Chrome 123 | news.ycombinator.com | 2024-02-07
I'm still unconvinced about this addition. And I don't even dislike Zstandard.
The main motivation seems to be that while Zstandard is worse than Brotli at the highest levels, it's substantially faster than Brotli when data has to be compressed on the fly with a limited computation budget. That might be true, but I have yet to see any concrete or even anecdotal evidence, even in the issue tracker [1], while there exist benchmarks where both Zstandard and Brotli are fast enough for web usage even at lower levels [2].
According to their FAQ [3], Meta and Akamai have successfully used Zstandard in their internal networks, but my gut feeling is that they never actually tried to optimize Brotli instead. In fact, Meta employs the main author of Zstandard, so it would have been easier for them to tune Zstandard than Brotli. While Brotli has some fundamental differences from Zstandard (in particular, Brotli doesn't use arithmetic-equivalent coding), in my opinion no one has concretely demonstrated that those differences would prevent Brotli from being fast enough for dynamic content.
[1] https://issues.chromium.org/issues/40196713
[2] https://github.com/powturbo/TurboBench/issues/43
[3] https://docs.google.com/document/d/14dbzMpsYPfkefAJos124uPrl...
This GitHub repo might have something that works: https://github.com/pfalcon/uzlib . According to the author, the compression ratio isn't very high.
Yes, this is a problem for books with long titles and subtitles. I use kobopatch to increase the width available for book titles in the book list: https://www.mobileread.com/forums/showpost.php?p=4239375&postcount=844 . Here are pics of some patches that I use.
Project mention: Leaking Bitwarden's Vault with a Nginx vulnerability | news.ycombinator.com | 2023-07-03
Zlib related posts
- Zstd Content-Encoding planned to ship with Chrome 123
- Support for a6700?
- Show HN: Pzip- blazing fast concurrent zip archiver and extractor
- TurboBench: Dynamic/Static web content compression benchmark
- Intel QuickAssist Technology Zstandard Plugin for Zstandard
Index
What are some of the best open-source Zlib projects? This list will help you:
# | Project | Stars
---|---|---
1 | pako | 5,295 |
2 | fflate | 2,051 |
3 | zlib-ng | 1,440 |
4 | Minizip-ng | 1,160 |
5 | innoextract | 893 |
6 | lzbench | 841 |
7 | flate2-rs | 825 |
8 | ObEngine | 787 |
9 | lizard | 633 |
10 | precomp-cpp | 390 |
11 | FractalCryptGUI | 379 |
12 | libzippp | 358 |
13 | rawspeed | 336 |
14 | matio | 324 |
15 | TurboBench | 310 |
16 | uzlib | 295 |
17 | zippy | 234 |
18 | kobopatch-patches | 201 |
19 | tinf | 142 |
20 | merecat | 138 |
21 | fast_zlib | 132 |
22 | ESP32-targz | 114 |
23 | libz-sys | 103 |