cstore_fdw VS LZ4

Compare cstore_fdw vs LZ4 and see what their differences are.

cstore_fdw

Columnar storage extension for Postgres built as a foreign data wrapper. Check out https://github.com/citusdata/citus for a modernized columnar storage implementation built as a table access method. (by citusdata)

LZ4

Extremely Fast Compression algorithm (by lz4)
                    cstore_fdw              LZ4
Mentions            6                       21
Stars               1,738                   9,208
Stars growth        0.4%                    1.8%
Activity            2.6                     9.5
Latest commit       about 3 years ago       5 days ago
Language            C                       C
License             Apache License 2.0      BSD 2-Clause (library) / GPL-2.0 (programs)
  • Mentions - the total number of mentions of the project that we've tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars that a project has on GitHub.
  • Growth - month-over-month growth in stars.
  • Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

cstore_fdw

Posts with mentions or reviews of cstore_fdw. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-21.
  • Moving a Billion Postgres Rows on a $100 Budget
    2 projects | news.ycombinator.com | 21 Feb 2024
    Columnar store PostgreSQL extensions exist; here are two, though I think I'm missing at least one more:

    https://github.com/citusdata/cstore_fdw

    https://github.com/hydradatabase/hydra

    You can also connect other stores using foreign data wrappers, like parquet files stored on an object store, duckdb, clickhouse… though joins aren't optimised, as PostgreSQL would do a full scan on the external table when joining.

  • Anything can be a message queue if you use it wrongly enough
    6 projects | news.ycombinator.com | 4 Jun 2023
    I'm definitely not from Citus data -- just a pg zealot fighting the culture war.

    If you want to reach people who can actually help, you probably want to check this link:

    https://github.com/citusdata/cstore_fdw/issues

  • Pg_squeeze: An extension to fix table bloat
    3 projects | news.ycombinator.com | 4 Oct 2022
    That appears to be the case:

    https://github.com/citusdata/cstore_fdw

    >Important notice: Columnar storage is now part of Citus

  • Ingesting an S3 file into an RDS PostgreSQL table
    3 projects | dev.to | 10 Jun 2022
    either we go for RDS, but then we stick to the AWS-handpicked extensions (so no timescale, citus, or their columnar storage, ...),
  • Postgres and Parquet in the Data Lake
    7 projects | news.ycombinator.com | 3 May 2022
    Re: performance overhead, with FDWs we have to re-munge the data into PostgreSQL's internal row-oriented TupleSlot format again. Postgres also doesn't run aggregations that can take advantage of the columnar format (e.g. CPU vectorization). Citus had some experimental code to get that working [2], but that was before FDWs supported aggregation pushdown. Nowadays it might be possible to basically have an FDW that hooks into the GROUP BY execution and runs a faster version of the aggregation that's optimized for columnar storage. We have a blog post series [3] about how we added agg pushdown support to Multicorn -- similar idea.

    There's also DuckDB, which obliterates both of these options when it comes to performance. In my (again limited, not very scientific) benchmarking on a customer's 3M row table [4] (278MB in cstore_fdw, 140MB in Parquet), I see a 10-20x (1/2s -> 0.1/0.2s) speedup on some basic aggregation queries when querying a Parquet file with DuckDB as opposed to using cstore_fdw/parquet_fdw.

    I think the dream is being able to use DuckDB from within a FDW as an OLAP query engine for PostgreSQL. duckdb_fdw [5] exists, but it basically took sqlite_fdw and connected it to DuckDB's SQLite interface, which means that a lot of operations get lost in translation and aren't pushed down to DuckDB, so it's not much better than plain parquet_fdw.

    This comment is already getting too long, but FDWs can indeed participate in partitions! There's a blog post I keep meaning to implement where the setup is: a "coordinator" PG instance has a partitioned table in which each partition is a postgres_fdw foreign table that proxies to a "data" PG instance. The "coordinator" node doesn't store any data and only gathers execution results from the "data" nodes. In the article, the "data" nodes store plain old PG tables, but I don't think there's anything preventing them from being parquet_fdw/cstore_fdw tables instead (a minimal sketch of that layout follows at the end of this list).

    [0] https://github.com/citusdata/cstore_fdw

  • Creating a simple data pipeline
    1 project | /r/dataengineering | 20 May 2021
    The citus extension for postgresql: https://github.com/citusdata/cstore_fdw (a minimal setup sketch follows below).
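
To make the cstore_fdw setup mentioned in these posts concrete, here is a minimal sketch in C using libpq, following the three steps from the cstore_fdw README: create the extension, a server, and a foreign table with a compression option. The connection string, table name, and columns are placeholders invented for illustration; the same pattern applies to parquet_fdw with its own server and file options.

    /* Minimal libpq sketch: create a cstore_fdw columnar foreign table.
     * Assumes a reachable PostgreSQL with the cstore_fdw extension installed;
     * the DSN, table name, and columns are placeholders.
     * Build: cc cstore_setup.c -I$(pg_config --includedir) -lpq -o cstore_setup
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void run(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "failed: %s\n%s", sql, PQerrorMessage(conn));
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");   /* placeholder DSN */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return EXIT_FAILURE;
        }

        /* The three steps from the cstore_fdw README: extension, server, table. */
        run(conn, "CREATE EXTENSION IF NOT EXISTS cstore_fdw");
        run(conn, "CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw");
        run(conn,
            "CREATE FOREIGN TABLE reviews ("
            "  customer_id  bigint,"
            "  rating       int,"
            "  review_text  text"
            ") SERVER cstore_server OPTIONS (compression 'pglz')");

        PQfinish(conn);
        return EXIT_SUCCESS;
    }

Loading data is then a plain COPY or INSERT ... SELECT into the foreign table, and queries read only the columns they touch.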
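
And here is a minimal sketch of the coordinator/data-node layout described in the "Postgres and Parquet in the Data Lake" comment above: a partitioned parent on the coordinator whose partitions are postgres_fdw foreign tables living on another instance. The host name, credentials, table, and partition bounds are all hypothetical.

    /* Minimal libpq sketch of a "coordinator + data node" layout: a partitioned
     * table whose partitions are postgres_fdw foreign tables on another
     * PostgreSQL instance. All names and bounds are hypothetical.
     * Build: cc shard_setup.c -I$(pg_config --includedir) -lpq -o shard_setup
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void run(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "failed: %s\n%s", sql, PQerrorMessage(conn));
        PQclear(res);
    }

    int main(void)
    {
        /* Connect to the coordinator node (placeholder DSN). */
        PGconn *conn = PQconnectdb("dbname=coordinator");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return EXIT_FAILURE;
        }

        run(conn, "CREATE EXTENSION IF NOT EXISTS postgres_fdw");
        run(conn, "CREATE SERVER data_node_1 FOREIGN DATA WRAPPER postgres_fdw "
                  "OPTIONS (host 'data1', dbname 'app')");
        run(conn, "CREATE USER MAPPING FOR CURRENT_USER SERVER data_node_1 "
                  "OPTIONS (user 'app', password 'secret')");

        /* The coordinator holds only the partitioned parent; the rows live in
         * events_2022 on data_node_1 (which could itself be a parquet_fdw or
         * cstore_fdw table on that node). */
        run(conn, "CREATE TABLE events (id bigint, created_at date, payload text) "
                  "PARTITION BY RANGE (created_at)");
        run(conn, "CREATE FOREIGN TABLE events_2022 "
                  "PARTITION OF events FOR VALUES FROM ('2022-01-01') TO ('2023-01-01') "
                  "SERVER data_node_1 OPTIONS (schema_name 'public', table_name 'events_2022')");

        PQfinish(conn);
        return EXIT_SUCCESS;
    }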

LZ4

Posts with mentions or reviews of LZ4. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Number sizes for LZ77 compression
    1 project | /r/compression | 30 Apr 2023
    LZ4 is a bit more complicated, but seems faster: https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md
  • Rsyncing 20TB locally
    2 projects | /r/zfs | 21 Mar 2023
    According to the numbers at https://github.com/lz4/lz4, you need around ten (10) quite modern cores in parallel to reach around 8 GB/s.
  • An Intro to Data Compression
    1 project | dev.to | 17 Feb 2023
    The popular NoSQL database Cassandra utilizes a compression algorithm called LZ4 to reduce the footprint of data at rest. LZ4 is characterized by very fast compression speed at the cost of a lower compression ratio. This is a design choice that allows Cassandra to maintain high write throughput while still benefiting from compression in some capacity.
  • Micron Unveils 24GB and 48GB DDR5 Memory Modules | AMD EXPO and Intel XMP 3.0 compatible
    1 project | /r/gadgets | 21 Jan 2023
    Yeah, sure, when you have monster core counts. On regular systems, not so much; here's from their own GitHub page: it achieves, eh, 5 GB/s on memory-to-memory transfers, i.e. the best-case scenario. So, uh, no? I'm not even sure it's any better than the CPU decompressor Nvidia used.
  • Cerbios Xbox Bios V2.2.0 BETA Released (1.0 - 1.6)
    2 projects | /r/originalxbox | 31 Dec 2022
  • zstd
    8 projects | news.ycombinator.com | 19 Dec 2022
    > The downside of lz4 is that it can’t be configured to run at higher & slower compression ratios.

    lz4 has some level of configurability? https://github.com/lz4/lz4/blob/v1.9.4/lib/lz4frame.h#L194

    There's also LZ4_HC (sketched after this list).

  • Best archival/compression format for whole hard drives
    1 project | /r/DataHoarder | 7 Dec 2022
    Since nobody mentioned it, I'll add lz4 (https://github.com/lz4/lz4).
  • I'm new to this
    2 projects | /r/androidroot | 28 Nov 2022
    Get your bootloader unlocked via Download mode, then obtain your stock firmware, preferably for your current region, from https://samfw.com (Download mode: CARRIER_CODE). Get the boot image from AP with 7zip, unpack it from LZ4 with https://github.com/lz4/lz4/releases (drag and drop), and patch it with Magisk https://github.com/topjohnwu/magisk/releases/latest. Grab the new image, name it "boot.img", pack it into a .tar with 7zip, and flash it to AP with Odin https://odindownload.com
  • An efficient image format for SDL
    4 projects | /r/gamedev | 28 Sep 2022
    After some investigation and experimentation, I found out that it was the PNG compression (well, decompression, I should say) that took a while. So I made some experiments using the LZ4 compression library, which is focused on decompression speed, and it turned out to be an excellent solution! (A minimal round-trip example using the C API follows at the end of this list.)
  • how to root Samsung galaxy note 10 plus 5g(SM-N976B
    1 project | /r/androidroot | 21 Jul 2022
    Root with magisk: whether you use OneUI ≤3 or 4, patch the specific image needed for it (pre 4: boot, after 4: recovery) and flash it to the device. Boot it and enjoy root. https://github.com/lz4/lz4/releases can help extracting it from the AP tarball.
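
The throughput and asset-loading comments above mostly exercise LZ4's block API. Here is a minimal round-trip sketch against that C API; the sample buffer is invented and the error handling is kept short.

    /* Minimal LZ4 block-API round trip: compress a buffer, then decompress it
     * and verify the result. The sample input is made up for illustration.
     * Build: cc lz4_roundtrip.c -llz4 -o lz4_roundtrip
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <lz4.h>

    int main(void)
    {
        const char src[] = "lz4 lz4 lz4 lz4 trades compression ratio for speed, "
                           "lz4 lz4 lz4 lz4 which is why it shows up in databases "
                           "and fast asset loading alike.";
        const int src_size = (int)sizeof(src);

        /* Worst-case size of the compressed block. */
        const int max_dst = LZ4_compressBound(src_size);
        char *compressed = malloc(max_dst);
        char *restored   = malloc(src_size);
        if (!compressed || !restored)
            return EXIT_FAILURE;

        const int c_size = LZ4_compress_default(src, compressed, src_size, max_dst);
        if (c_size <= 0) {
            fprintf(stderr, "compression failed\n");
            return EXIT_FAILURE;
        }

        const int d_size = LZ4_decompress_safe(compressed, restored, c_size, src_size);
        if (d_size != src_size || memcmp(src, restored, src_size) != 0) {
            fprintf(stderr, "round trip failed\n");
            return EXIT_FAILURE;
        }

        printf("original %d bytes -> compressed %d bytes\n", src_size, c_size);
        free(compressed);
        free(restored);
        return EXIT_SUCCESS;
    }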
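
On the configurability point from the zstd thread above: the block API has a high-compression companion (LZ4_HC), and the frame API exposes a compressionLevel field in LZ4F_preferences_t (the lz4frame.h line linked there). Here is a small sketch comparing the default and HC compressors on an invented buffer.

    /* Sketch of LZ4's slower/denser mode: LZ4_compress_HC takes an explicit
     * compression level (1..LZ4HC_CLEVEL_MAX), unlike LZ4_compress_default.
     * The input buffer and level choice are illustrative only.
     * Build: cc lz4_hc.c -llz4 -o lz4_hc
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <lz4.h>
    #include <lz4hc.h>

    int main(void)
    {
        /* A repetitive buffer so both modes have something to compress. */
        char src[4096];
        for (size_t i = 0; i < sizeof(src); i++)
            src[i] = (char)('a' + (i % 16));

        const int src_size = (int)sizeof(src);
        const int max_dst  = LZ4_compressBound(src_size);
        char *dst = malloc(max_dst);
        if (!dst)
            return EXIT_FAILURE;

        const int fast = LZ4_compress_default(src, dst, src_size, max_dst);
        const int hc   = LZ4_compress_HC(src, dst, src_size, max_dst, LZ4HC_CLEVEL_MAX);

        printf("default: %d bytes, HC level %d: %d bytes (from %d bytes)\n",
               fast, LZ4HC_CLEVEL_MAX, hc, src_size);

        free(dst);
        return EXIT_SUCCESS;
    }

Decompression goes through the same decoder in both cases, so the extra effort is paid only at compression time.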

What are some alternatives?

When comparing cstore_fdw and LZ4 you can also consider the following projects:

ZLib - A massively spiffy yet delicately unobtrusive compression library.

zstd - Zstandard - Fast real-time compression algorithm

odbc2parquet - A command line tool to query an ODBC data source and write the result into a parquet file.

Snappy - A fast compressor/decompressor

brotli - Brotli compression format

cute_headers - Collection of cross-platform one-file C/C++ libraries with no dependencies, primarily used for games

LZMA - (Unofficial) Git mirror of LZMA SDK releases

delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, and APIs for Scala, Java, Rust, Ruby, and Python

parquet_fdw - Parquet foreign data wrapper for PostgreSQL

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard