| | LZ4 | Snappy |
|---|---|---|
| Mentions | 21 | 5 |
| Stars | 8,313 | 5,659 |
| Growth | 1.5% | 0.7% |
| Activity | 8.6 | 2.1 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LZ4
- Rsyncing 20TB locally
According to the benchmark figures at https://github.com/lz4/lz4, you need around ten (10) fairly modern cores running in parallel to reach around 8 GB/s.
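A rough sketch of that core-count estimate, assuming roughly 800 MB/s of single-core compression throughput, which is in the ballpark of the single-core figures in the repository's benchmark table; the real number depends on your CPU and data:

```c
/* Back-of-the-envelope estimate: cores needed to hit a target LZ4
 * compression throughput. The 800 MB/s per-core figure is an assumed
 * value taken from published single-core benchmarks; measure on your
 * own hardware and data before relying on it. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double target_mb_per_s   = 8000.0; /* ~8 GB/s for the rsync job */
    const double per_core_mb_per_s = 800.0;  /* assumed: one core, default level */

    double cores = ceil(target_mb_per_s / per_core_mb_per_s);
    printf("Approximately %.0f cores needed\n", cores); /* prints 10 */
    return 0;
}
```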
- Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
- zstd
> The downside of lz4 is that it can’t be configured to run at higher & slower compression ratios.
lz4 does have some level of configurability via the frame API's compression level: https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194
There's also LZ4_HC for higher (and slower) compression.
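A minimal sketch of both knobs, assuming lz4 v1.9.x headers; the level choices and buffer handling here are illustrative, not a recommendation:

```c
/* Two ways to trade speed for ratio with lz4:
 * 1) the frame API's compressionLevel field (lz4frame.h),
 * 2) the block-level LZ4_HC API (lz4hc.h). */
#include <lz4frame.h>
#include <lz4hc.h>
#include <stdlib.h>
#include <string.h>

/* Compress a buffer into an LZ4 frame at an HC level. Returns the
 * compressed size, or 0 on error; caller frees *dstOut. */
size_t compress_frame_hc(const char *src, size_t srcSize, char **dstOut)
{
    LZ4F_preferences_t prefs;
    memset(&prefs, 0, sizeof prefs);
    prefs.compressionLevel = LZ4HC_CLEVEL_DEFAULT; /* 9: slower, smaller output */

    size_t dstCapacity = LZ4F_compressFrameBound(srcSize, &prefs);
    char *dst = malloc(dstCapacity);
    if (!dst) return 0;

    size_t written = LZ4F_compressFrame(dst, dstCapacity, src, srcSize, &prefs);
    if (LZ4F_isError(written)) { free(dst); return 0; }

    *dstOut = dst;
    return written;
}

/* Block-level alternative: call LZ4_HC directly. */
int compress_block_hc(const char *src, int srcSize, char *dst, int dstCapacity)
{
    return LZ4_compress_HC(src, dst, srcSize, dstCapacity, LZ4HC_CLEVEL_MAX);
}
```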
- I'm new to this
Get your bootloader unlocked via Download mode, then obtain your stock firmware, preferably for your current region, from https://samfw.com (Download mode: CARRIER_CODE). Extract the boot image from AP with 7-Zip, unpack it from LZ4 with https://github.com/lz4/lz4/releases (drag and drop), patch it with Magisk (https://github.com/topjohnwu/magisk/releases/latest), grab the new image, name it "boot.img", pack it into a .tar with 7-Zip, and flash it to AP with Odin (https://odindownload.com).
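For what it's worth, the "unpack from LZ4" drag-and-drop step is just frame decompression; here is a minimal sketch of the same thing with the LZ4 frame API, with the file names assumed from the comment above:

```c
/* Hypothetical sketch: decompress boot.img.lz4 into boot.img using the
 * LZ4 frame API (lz4frame.h). Equivalent in effect to dropping the file
 * onto the lz4 binary. Error handling kept minimal. */
#include <lz4frame.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in  = fopen("boot.img.lz4", "rb");
    FILE *out = fopen("boot.img", "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    LZ4F_dctx *dctx;
    if (LZ4F_isError(LZ4F_createDecompressionContext(&dctx, LZ4F_VERSION)))
        return 1;

    char srcBuf[64 * 1024], dstBuf[256 * 1024];
    size_t readSize;
    while ((readSize = fread(srcBuf, 1, sizeof srcBuf, in)) > 0) {
        const char *srcPtr = srcBuf;
        const char *srcEnd = srcBuf + readSize;
        while (srcPtr < srcEnd) {
            size_t dstSize = sizeof dstBuf;
            size_t srcSize = (size_t)(srcEnd - srcPtr);
            size_t ret = LZ4F_decompress(dctx, dstBuf, &dstSize,
                                         srcPtr, &srcSize, NULL);
            if (LZ4F_isError(ret)) {
                fprintf(stderr, "decompression error: %s\n",
                        LZ4F_getErrorName(ret));
                return 1;
            }
            fwrite(dstBuf, 1, dstSize, out);
            srcPtr += srcSize; /* LZ4F_decompress reports bytes consumed */
        }
    }

    LZ4F_freeDecompressionContext(dctx);
    fclose(in);
    fclose(out);
    return 0;
}
```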
- An efficient image format for SDL
After some investigation and experimentation, I found that it was the PNG compression (well, decompression, I should say) that took a while. So I ran some experiments using the LZ4 compression library, which is focused on decompression speed, and it turned out to be an excellent solution!
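A minimal sketch of that idea, compressing already-decoded pixel data with the LZ4 block API instead of re-decoding PNG at load time; the buffer layout and sizes here are assumptions, not the author's actual format:

```c
/* Illustrative round trip over a raw RGBA pixel buffer using the LZ4
 * block API. A real image format would store packedSize and the
 * original size in its file header. */
#include <lz4.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const int width = 512, height = 512;
    const int rawSize = width * height * 4;   /* assumed RGBA8 layout */
    char *pixels = calloc(1, rawSize);        /* pretend this holds an image */

    /* Compress once, at asset-build time. */
    int bound = LZ4_compressBound(rawSize);
    char *packed = malloc(bound);
    int packedSize = LZ4_compress_default(pixels, packed, rawSize, bound);

    /* Decompress at load time: this is the fast path LZ4 is built for. */
    char *unpacked = malloc(rawSize);
    int got = LZ4_decompress_safe(packed, unpacked, packedSize, rawSize);

    int ok = (got == rawSize) && (memcmp(pixels, unpacked, rawSize) == 0);

    free(pixels); free(packed); free(unpacked);
    return ok ? 0 : 1;
}
```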
- Bzip3 – a better and stronger spiritual successor to bzip2
If anyone just cares about speed instead of compression ratio, I'd recommend lz4 [1]. I only recently started using it. Its speed is almost comparable to memcpy.
- I just took a random screenshot and made it look prettier. [ I don't know if this counts as fanart ]
Edit: Realtime compression (a good compression library like Zstandard can make a game less than half the size while taking a tiny amount of CPU power when loading stuff; I think that's a pretty worthwhile trade.) (ZSTD github) (LZ4 github)
- What's the best way to compress strings?
lz4 for maximum decompression speed, for data that is often read but rarely written
- How to become a tools/graphics/engine programmer
Getting lost in material models is tempting, but at this point you are overdue for working on your own asset pipeline. glTF is great, but you should learn how to do it yourself. The hardest part will be reading source asset files: the FBX SDK is painful, and Assimp isn't great either. Writing your own exporter from Maya or Blender to your own intermediate text format would be good if you are up for it. From whatever source, make your own archive format and binary formats for meshes, animations, textures and scenes. Use https://github.com/lz4/lz4 for compression. You should be able to decompress a list of assets into a big linear array and use them right there with just a bit of pointer fix-up (see the sketch below). Minimize the amount of memory you have to touch from start to finish. Data that is going to the GPU (textures, vertex/index buffers) should decompress straight into mapped buffers for fast uploads.
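A rough sketch of that decompress-into-one-arena idea, assuming a hypothetical archive layout (one LZ4 block per asset plus a small table of sizes and offsets); this is not any real engine's format:

```c
/* Hypothetical archive layout: per-asset records giving compressed size,
 * uncompressed size, and the offset of the asset's data inside one big
 * arena. All assets decompress back-to-back into that arena, so the
 * "pointer fix-up" is just arena + offset. */
#include <lz4.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t compressedSize;
    uint32_t uncompressedSize;
    uint32_t arenaOffset;      /* where this asset lives after unpacking */
} AssetRecord;

typedef struct {
    uint32_t     assetCount;
    AssetRecord *records;      /* assetCount entries */
    const char  *blobs;        /* concatenated LZ4 blocks, in record order */
    uint32_t     arenaSize;    /* sum of all uncompressedSize values */
} Archive;

/* Decompress every asset into one linear allocation and return it.
 * Returns NULL on any decode error. Caller frees the arena. */
char *unpack_archive(const Archive *ar)
{
    char *arena = malloc(ar->arenaSize);
    if (!arena) return NULL;

    const char *src = ar->blobs;
    for (uint32_t i = 0; i < ar->assetCount; i++) {
        const AssetRecord *r = &ar->records[i];
        int got = LZ4_decompress_safe(src,
                                      arena + r->arenaOffset,
                                      (int)r->compressedSize,
                                      (int)r->uncompressedSize);
        if (got != (int)r->uncompressedSize) { free(arena); return NULL; }
        src += r->compressedSize;
    }
    return arena; /* assets are usable in place via arena + arenaOffset */
}
```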
- LZ4, an Extremely Fast Compression Algorithm
I'm not a fan of the stacked bar charts; I prefer the "Benchmarks" data table on the GitHub source page: https://github.com/lz4/lz4
It makes it very clear where LZ4 sits in comparisons of compression speed, decompression speed, and compression ratio.
Snappy
- Why I enjoy using the Nim programming language at Reddit.
Another example of Nim being really fast is the supersnappy library. This library benchmarks faster than Google’s C or C++ Snappy implementation.
- Stretch iPhone to Its Limit: 2GiB Stable Diffusion Model Runs Locally on Device
It doesn't destroy performance for the simple reason that nowadays memory access has higher latency than pure compute. If you need to use compute to produce some data to be stored in memory, your overall throughput could very well be faster than without compression.
There has been a large amount of innovation in fast compression in recent years. Traditional compression tools like gzip or xz are geared towards higher compression ratios, but memory compression tends to favor speed. Check out these algorithms:
* lz4: https://lz4.github.io/lz4/
* Google's snappy: https://github.com/google/snappy
* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
- What's the best way to compress strings?
What are some alternatives?
zstd - Zstandard - Fast real-time compression algorithm
brotli - Brotli compression format
LZMA - (Unofficial) Git mirror of LZMA SDK releases
ZLib - A massively spiffy yet delicately unobtrusive compression library.
LZFSE - LZFSE compression library and command line tool
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
LZHAM - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression speed, C/C++
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.