In the PG15 development branch, LZ4 WAL compression has been enabled:
https://github.com/postgres/postgres/commit/4035cd5d4eee4dae...
"Add support for LZ4 with compression of full-page writes in WAL"
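Concretely, that commit extends the existing `wal_compression` setting (which previously only toggled pglz) to take a method name. A minimal `postgresql.conf` sketch, assuming the server was built with `--with-lz4`:

```
# postgresql.conf -- requires a build configured with --with-lz4
wal_compression = lz4   # compress full-page writes in WAL (default: off)
```

Full-page writes dominate WAL volume right after checkpoints, so this is where compression pays off most.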
With cloud network-based storage like EBS or pd-ssd, I doubt you'll notice any IOPS-related perf penalty when running an RDBMS on top of ZFS. Assuming a mixed read/write workload, you will likely hit the disk write-throughput limit first, as ZFS just writes ... a lot more, due to how blocks are stored internally: https://github.com/openzfs/zfs/issues/6584#issuecomment-3848...
On an InnoDB database, I get about a 3x compression ratio with ZSTD, and ZFS still has to write about 2x more than EXT4.
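For context, a setup like that typically involves dataset tuning along these lines (the property names are real ZFS properties; `tank/db` is a placeholder dataset name, and the exact values are workload-dependent):

```
# Illustrative ZFS settings for an InnoDB data directory
zfs set compression=zstd   tank/db   # transparent ZSTD compression (OpenZFS 2.0+)
zfs set recordsize=16k     tank/db   # match InnoDB's 16 KiB page size
zfs set logbias=throughput tank/db   # avoid double-writing data through the ZIL
```

Even so, copy-on-write means every modified record is rewritten in full plus metadata, which is where the ~2x write amplification over EXT4 comes from.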
"chunk_group_row_limit - the maximum number of rows per chunk for newly-inserted data. Existing chunks of data will not be changed and may have more rows than this maximum value. The default value is 10000."
read more: https://github.com/citusdata/citus/tree/master/src/backend/c...
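If I remember the citus columnar API correctly, per-table options like this are set through the `alter_columnar_table_set` UDF; a sketch (table name and value are placeholders):

```
-- Illustrative: lower the chunk group size for a columnar table
SELECT alter_columnar_table_set('my_events', chunk_group_row_limit => 5000);
```

Smaller chunk groups mean finer-grained skipping on selective queries at the cost of compression ratio, since each group is compressed independently.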
True.
If you're on your way down this rabbit hole, there's a bunch of old-machine-specific compression algorithms, developed by the emulator community, e.g. LZSA: https://github.com/emmanuel-marty/lzsa
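"Byte-aligned" is the key property for those targets: tokens and offsets all land on byte boundaries, so an 8-bit CPU can decode with plain loads and copies instead of bit-shifting. A toy decompressor for a hypothetical byte-aligned token format (not the actual LZSA bitstream) sketches the idea:

```python
# Toy byte-aligned LZ decompressor. The token format here is made up for
# illustration and is NOT the real LZSA encoding. Each token is:
#   1 byte  literal count L
#   L bytes of literals
#   1 byte  match length M (0 terminates the stream)
#   2 bytes little-endian match offset (present only when M > 0)
def decompress(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        lit = data[i]; i += 1
        out += data[i:i + lit]; i += lit           # copy literals verbatim
        mlen = data[i]; i += 1
        if mlen == 0:                              # end-of-stream marker
            break
        off = data[i] | (data[i + 1] << 8); i += 2
        for _ in range(mlen):                      # byte-by-byte back-copy,
            out.append(out[-off])                  # so overlapping matches work
    return bytes(out)

# "abc" as literals, then a 6-byte match at offset 3, then the end token
packed = bytes([3, 97, 98, 99, 6, 3, 0, 0, 0])
print(decompress(packed))  # b'abcabcabc'
```

On a Z80 or 6502 this structure maps almost directly onto a short copy loop, which is why these formats trade a little ratio for much faster decompression than bit-packed schemes.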