cstore_fdw
geoparquet
| | cstore_fdw | geoparquet |
|---|---|---|
| Mentions | 6 | 3 |
| Stars | 1,738 | 719 |
| Growth | 0.4% | 5.0% |
| Activity | 2.6 | 5.5 |
| Latest commit | about 3 years ago | 5 days ago |
| Language | C | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity score of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
cstore_fdw
-
Moving a Billion Postgres Rows on a $100 Budget
Columnar-store PostgreSQL extensions exist; here are two, but I think I'm missing at least one more:
https://github.com/citusdata/cstore_fdw
https://github.com/hydradatabase/hydra
You can also connect other stores using foreign data wrappers, like Parquet files stored on an object store, DuckDB, ClickHouse… though joins aren't optimised: PostgreSQL does a full scan on the external table when joining.
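As a rough illustration of the FDW approach described above, here is a minimal sketch of exposing a Parquet file to PostgreSQL via parquet_fdw. It assumes the parquet_fdw extension is installed on the server; the DSN, file path, table name, and columns are hypothetical placeholders.

```python
# Sketch: expose a Parquet file to PostgreSQL through parquet_fdw.
# Assumes parquet_fdw is installed; DSN, path, and schema are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=analytics")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS parquet_fdw")
    cur.execute("""
        CREATE SERVER IF NOT EXISTS parquet_srv
          FOREIGN DATA WRAPPER parquet_fdw
    """)
    cur.execute("""
        CREATE FOREIGN TABLE IF NOT EXISTS events (
            id bigint,
            created_at timestamp
        ) SERVER parquet_srv
          OPTIONS (filename '/data/events.parquet')
    """)
    # Joins against local tables work, but PostgreSQL will scan the whole
    # foreign table rather than pushing the join down.
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone())
```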
-
Anything can be a message queue if you use it wrongly enough
I'm definitely not from Citus Data -- just a pg zealot fighting the culture war.
If you want to reach people who can actually help, you probably want to check this link:
https://github.com/citusdata/cstore_fdw/issues
-
Pg_squeeze: An extension to fix table bloat
That appears to be the case:
https://github.com/citusdata/cstore_fdw
> Important notice: Columnar storage is now part of Citus
-
Ingesting an S3 file into an RDS PostgreSQL table
either we go for RDS but stick to AWS's hand-picked extensions (so no Timescale, Citus, or their columnar storage, ... ),
-
Postgres and Parquet in the Data Lake
Re: performance overhead, with FDWs we have to re-munge the data into PostgreSQL's internal row-oriented TupleSlot format again. Postgres also doesn't run aggregations that can take advantage of the columnar format (e.g. CPU vectorization). Citus had some experimental code to get that working [2], but that was before FDWs supported aggregation pushdown. Nowadays it might be possible to basically have an FDW that hooks into the GROUP BY execution and runs a faster version of the aggregation that's optimized for columnar storage. We have a blog post series [3] about how we added agg pushdown support to Multicorn -- similar idea.
There's also DuckDB, which obliterates both of these options when it comes to performance. In my (again limited, not very scientific) benchmarking on a customer's 3M-row table [4] (278MB in cstore_fdw, 140MB in Parquet), I see a 10-20x speedup (1-2s down to 0.1-0.2s) on some basic aggregation queries when querying a Parquet file with DuckDB as opposed to using cstore_fdw/parquet_fdw.
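For reference, a minimal sketch of aggregating directly over a Parquet file with DuckDB; the file and column names are placeholders, not the actual benchmark from the comment above.

```python
# Sketch: run an aggregation directly on a Parquet file with DuckDB.
# File and column names are hypothetical placeholders.
import duckdb

con = duckdb.connect()  # in-memory database
rows = con.execute("""
    SELECT category, count(*), avg(value)
    FROM read_parquet('data.parquet')
    GROUP BY category
""").fetchall()
print(rows)
```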
I think the dream is being able to use DuckDB from within a FDW as an OLAP query engine for PostgreSQL. duckdb_fdw [5] exists, but it basically took sqlite_fdw and connected it to DuckDB's SQLite interface, which means that a lot of operations get lost in translation and aren't pushed down to DuckDB, so it's not much better than plain parquet_fdw.
This comment is already getting too long, but FDWs can indeed participate in partitions! There's a blog post I keep meaning to implement, where the setup is: a "coordinator" PG instance has a partitioned table in which each partition is a postgres_fdw foreign table proxying to a "data" PG instance. The "coordinator" node doesn't store any data and only gathers execution results from the "data" nodes. In the article, the "data" nodes store plain old PG tables, but I don't think there's anything preventing them from being parquet_fdw/cstore_fdw tables instead (see the sketch below).
[0] https://github.com/citusdata/cstore_fdw
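A hedged sketch of that coordinator setup, assuming PostgreSQL 11+ (which allows foreign tables as partitions) and postgres_fdw; hosts, credentials, and the table schema are hypothetical placeholders.

```python
# Sketch: a "coordinator" node whose partitions are postgres_fdw foreign
# tables living on a remote "data" node. All names below are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=coordinator")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    cur.execute("""
        CREATE SERVER IF NOT EXISTS data_node_1
          FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'data1.internal', dbname 'shard1')
    """)
    cur.execute("""
        CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
          SERVER data_node_1 OPTIONS (user 'app', password 'secret')
    """)
    # The coordinator holds no data itself, only the partitioned parent.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id bigint,
            created date
        ) PARTITION BY RANGE (created)
    """)
    # Each partition proxies to a table on the "data" node; nothing stops
    # that remote table from being parquet_fdw/cstore_fdw-backed instead.
    cur.execute("""
        CREATE FOREIGN TABLE IF NOT EXISTS events_2023
          PARTITION OF events
          FOR VALUES FROM ('2023-01-01') TO ('2024-01-01')
          SERVER data_node_1
          OPTIONS (table_name 'events_2023')
    """)
```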
-
Creating a simple data pipeline
The Citus extension for PostgreSQL. https://github.com/citusdata/cstore_fdw
geoparquet
-
Friends don't let friends export to CSV
That's why I'm working on the GeoParquet spec [0]! It gives you both compression-by-default and super fast reads and writes! So it's usually as small as gzipped CSV, if not smaller, while being faster to read and write than GeoPackage.
Try using `GeoDataFrame.to_parquet` and `geopandas.read_parquet`
[0]: https://github.com/opengeospatial/geoparquet
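A minimal sketch of that round trip, assuming GeoPandas with pyarrow installed; the input file is a placeholder for any source GeoPandas can read.

```python
# Sketch: GeoParquet round trip with GeoPandas (requires pyarrow).
# "input.gpkg" is a placeholder input file.
import geopandas as gpd

gdf = gpd.read_file("input.gpkg")
gdf.to_parquet("data.parquet")          # compressed GeoParquet by default
same = gpd.read_parquet("data.parquet")
print(same.crs, len(same))
```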
-
COMTiles (Cloud Optimized Map Tiles) hosted on Amazon S3 and Visualized with MapLibre GL JS
GeoParquet
-
Postgres and Parquet in the Data Lake
> "Generating Parquet"
It is also useful for moving data from Postgres to BigQuery! ( batch load )
https://cloud.google.com/bigquery/docs/loading-data-cloud-st...
Thanks for the "ogr2ogr" trick! :-)
I hope the next blog post will be about GeoParquet and storing complex geometries in parquet format :-)
https://github.com/opengeospatial/geoparquet
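For the "ogr2ogr" trick mentioned above, a hedged sketch assuming GDAL 3.5+ built with the Parquet driver; the connection string and table name are placeholders.

```python
# Sketch: convert a PostGIS table to (Geo)Parquet with ogr2ogr.
# Requires GDAL >= 3.5 with the Parquet driver; names are placeholders.
import subprocess

subprocess.run([
    "ogr2ogr",
    "-f", "Parquet",          # GDAL's Parquet output driver
    "geoms.parquet",          # destination file
    "PG:dbname=gis",          # source PostGIS connection string
    "my_geometries",          # source table/layer
], check=True)
```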
What are some alternatives?
ZLib - A massively spiffy yet delicately unobtrusive compression library.
mbtiles-spec - specification documents for the MBTiles tileset format
odbc2parquet - A command line tool to query an ODBC data source and write the result into a parquet file.
zstd - Zstandard - Fast real-time compression algorithm
geemap - A Python package for interactive geospatial analysis and visualization with Google Earth Engine.
cute_headers - Collection of cross-platform one-file C/C++ libraries with no dependencies, primarily used for games
flatgeobuf - A performant binary encoding for geographic data based on flatbuffers
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
postgres_vectorization_test - Vectorized executor to speed up PostgreSQL
parquet_fdw - Parquet foreign data wrapper for PostgreSQL
BlenderGIS - Blender addons to make the bridge between Blender and geographic data