zsv VS DuckDB

Compare zsv vs DuckDB and see what their differences are.

zsv

zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser (by liquidaty)

DuckDB

DuckDB is an analytical in-process SQL database management system (by duckdb)
                 zsv           DuckDB
Mentions         27            70
Stars            231           30,949
Growth           2.2%          5.2%
Activity         9.1           10.0
Latest commit    8 days ago    3 days ago
Language         C             C++
License          MIT License   MIT License
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

zsv

Posts with mentions or reviews of zsv. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-10-27.
  • How fast can you parse a CSV file in C#?
    4 projects | news.ycombinator.com | 27 Oct 2024
    Haven't yet seen any of these beat https://github.com/liquidaty/zsv when real-world constraints are applied (e.g. we no longer assume that line ends are always \n, or that there are no dbl-quote chars, embedded commas/newlines/dbl-quotes). And maybe under the artificial conditions as well.
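
    To make those "real-world constraints" concrete, here is a small shell sketch (assuming zsv is installed and that its `count` subcommand reports parsed records rather than raw lines):

    ```sh
    # two records, one containing an embedded newline and escaped quotes
    printf 'id,comment\n1,"line one\nline two"\n2,"she said ""hi"""\n' > tricky.csv

    wc -l tricky.csv      # 4 physical lines
    zsv count tricky.csv  # 2 data records: a CSV-aware parser must not split on raw \n
    ```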
  • CSVs Are Kinda Bad. DSVs Are Kinda Good
    2 projects | news.ycombinator.com | 14 Aug 2024
    I cannot imagine any way it is worth anyone's time to follow this article's suggestion vs just using something like zsv (https://github.com/liquidaty/zsv, which I'm an author of) or xsv (https://github.com/BurntSushi/xsv) and then spending the time saved on "real" work.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
    If it could be tabular in nature, maybe convert to sqlite3 so you can make use of indexing (sketched below), or CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).

    https://github.com/BurntSushi/xsv

    https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
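
    The sqlite3 route suggested here is a short step with the stock sqlite3 shell (a sketch; people.csv and its name column are invented):

    ```sh
    # .import with .mode csv creates the table from the CSV's header row;
    # an ordinary index then makes point lookups fast
    sqlite3 -cmd '.mode csv' -cmd '.import people.csv people' data.db \
      'CREATE INDEX idx_people_name ON people(name);'
    sqlite3 data.db "SELECT COUNT(*) FROM people WHERE name = 'Ada';"
    ```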

  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). The speed of CSV parsers is fast enough that unless you are doing something ultra-trivial such as "count rows", your bottleneck will be elsewhere.

    The benefits of CSV are:

    - human readable

    - does not need to be typed (sometimes, data in the raw such as date-formatted data is not amenable to typing without introducing a pre-processing layer that gets you further from the original data)

    - accessible to anyone: you don't need to be a data person to dbl-click and open in Excel or similar

    The main drawback is that if your data is already typed, CSV does not communicate what the type is (a sketch of one mitigation follows below). You can alleviate this through various approaches such as the one described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be assured that your starting data conforms to non-text data types, there are probably better formats than CSV.

    The main benefit of Arrow, IMHO, is less as a format for transmitting / communicating but rather as a format for data at rest, that would benefit from having higher performance column-based read and compression
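
    On the typing point above, engines like DuckDB mitigate the drawback by sniffing types at read time; a minimal sketch (the file contents are invented):

    ```sh
    printf 'id,day,amount\n1,2023-03-06,9.99\n2,2023-03-07,12.50\n' > sales.csv
    # DESCRIBE shows the types DuckDB inferred from plain text
    # (here it should sniff BIGINT, DATE, DOUBLE)
    duckdb -c "DESCRIBE SELECT * FROM read_csv_auto('sales.csv');"
    ```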

  • Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
    11 projects | news.ycombinator.com | 4 Feb 2023
  • csvkit: Command-line tools for working with CSV
    1 project | news.ycombinator.com | 20 Jan 2023
    I wanted so much to use csvkit and all the features it had, but its horrendous performance made it unscalable and therefore the more I used it, the more technical debt I accumulated.

    This was one of the reasons I wrote zsv (https://github.com/liquidaty/zsv). Maybe csvkit could incorporate the zsv engine and we could get the best of both worlds?

    Examples (using majestic million csv):

    ---
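
    As a sketch of the kind of head-to-head this enables (illustrative commands only, not the original benchmark; both count the same records):

    ```sh
    time csvstat --count majestic_million.csv   # csvkit (Python)
    time zsv count majestic_million.csv         # zsv (C, SIMD)
    ```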

  • Ask HN: Programs that saved you 100 hours? (2022 edition)
    69 projects | news.ycombinator.com | 20 Dec 2022
  • Show HN: Split CSV into multiple files to avoid the Excel's 1M row limitation
    2 projects | news.ycombinator.com | 17 Oct 2022

    Splitting on physical lines of course assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line-ends. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and replaces each embedded newline with \n.

    You can also use something like `xsv split` (see https://lib.rs/crates/xsv) which frankly is probably your best option as of today (though zsv will be getting its own shard command soon)
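
    Putting the two steps together (a sketch; the file name is made up, and `split` is the standard Unix utility):

    ```sh
    # 1) escape embedded line-ends so each record is one physical line;
    # 2) split on line count to stay under Excel's 1,048,576-row sheet limit
    zsv 2tsv big.csv | split -l 1000000 - part_
    # note: the header row lands only in the first chunk (part_aa)
    ```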

  • Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
    9 projects | news.ycombinator.com | 24 Sep 2022
  • Ask HN: Best way to find help creating technical doc (open- or closed-source)?
    1 project | news.ycombinator.com | 23 Sep 2022
    Am looking for one-time help creating documentation (e.g. man pages, tutorials) for open source project (e.g. https://github.com/liquidaty/zsv) as well as product documentation for commercial products, but not enough need for a full-time job. Requires familiarity with, for lack of better term, data janitorial work, and preferably with methods of auto-generating documentation. Any suggestions as to forums or other ways to find folks who might fit the bill for ad-hoc or part-time work of this nature?

DuckDB

Posts with mentions or reviews of DuckDB. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-05-29.
  • ClickHouse raises $350M Series C
    7 projects | news.ycombinator.com | 29 May 2025
    Thanks for creating this issue, it is worth investigating!

    I see you also created similar issues in Polars: https://github.com/pola-rs/polars/issues/17932 and DuckDB: https://github.com/duckdb/duckdb/issues/17066

    ClickHouse has a built-in memory tracker, so even if there is not enough memory, it will stop the query and send an exception to the client instead of crashing (see the sketch below). It also allows fair sharing of memory between different workloads.

    You need to provide more info on the issue for reproduction, e.g., how to fill the tables. 16 GB of memory should be enough even for a CROSS JOIN between a 10 billion-row and a 100-row table, because it is processed in a streaming fashion without accumulating a large amount of data in memory. The same should be true for a merge join.

    However, there are places where a large buffer might be needed. For example, if you insert data into a table backed by S3 storage, it requires a buffer that can be on the order of 500 MB.

    There is a possibility that your machine has 16 GB of memory, but most of it is consumed by Chrome, Slack, or Safari, and not much is left for ClickHouse server.
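
    The memory tracker described above is driven by ordinary settings; a minimal sketch (the ~4 GB cap is arbitrary):

    ```sh
    # cap this query at ~4 GB: if the tracker hits the cap, ClickHouse raises
    # a MEMORY_LIMIT_EXCEEDED exception rather than crashing the server
    clickhouse-client --max_memory_usage=4000000000 \
      -q "SELECT number FROM system.numbers LIMIT 10"
    ```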

  • ClickHouse gets lazier (and faster): Introducing lazy materialization
    5 projects | news.ycombinator.com | 22 Apr 2025
    It does, but the performance isn't great apparently: https://github.com/duckdb/duckdb/discussions/10161
  • DuckDB 1.2.2 Released
    1 project | news.ycombinator.com | 9 Apr 2025
  • The DuckDB Local UI
    21 projects | news.ycombinator.com | 12 Mar 2025
    I agree that in certain places the blog post seems to hint that this functionality is fully baked in - we've adjusted the blog post to be more explicit that this is an extension.

    We have collaborated with MotherDuck on streamlining the experience of launching the UI through auto-installation, but the DuckDB Foundation still remains in full control of DuckDB and the extension ecosystem. This has no impact on that.

    For further clarification:

    * The auto-installation mechanism is identical to that of other trusted extensions - the auto-installation is triggered when a specific function is called that does not exist in the catalog - in this case the `start_ui` function (shown below). See [1]. The query I mentioned just calls that function. The only special feature here is the addition of the CLI flag (and what that flag executes is user-configurable).

    * The HTTP server is necessary for the extension to function as the extension needs to communicate with the browser. The server is open-source as part of the extension code [2]. The server (1) fetches web resources (javascript/css) from ui.duckdb.org, and (2) communicates with localhost to co-ordinate the UI with DuckDB. Outside of these the server doesn't interface with other external web services.

    [1] https://github.com/duckdb/duckdb/blob/main/src/include/duckd...
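
    For reference, the two entry points described above look like this (per the duckdb.org announcement; assumes a DuckDB build with network access to auto-install the `ui` extension):

    ```sh
    # CLI flag: start DuckDB and open the local UI in the browser
    duckdb -ui
    # SQL entry point: calling the function triggers the trusted-extension
    # auto-install, then starts the localhost server
    duckdb -c "CALL start_ui();"
    ```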

  • Should You Ditch Spark for DuckDB or Polars?
    3 projects | news.ycombinator.com | 15 Dec 2024
  • Gah – CLI to install software from GitHub Releases
    8 projects | news.ycombinator.com | 11 Dec 2024
    1) https://github.com/duckdb/duckdb/releases/download/v1.1.3/duckdb_cli-linux-amd64.zip
  • Show HN: Trilogy – A Reusable, Composable SQL Experiment
    5 projects | news.ycombinator.com | 25 Nov 2024
    Any particular examples you have in mind? The demo is just referencing https://github.com/duckdb/duckdb/tree/main/extension/tpcds/d... which I wouldn't regard as a standard of good SQL (implicit joins, yikes! - see the sketch below) - but it is a useful capability reference (as is tpc-ds in general).

    As I tried to convey, I like SQL a lot - my frustration is more around the lifecycle and maintainability.

    Happy to add more ergonomic references in other places, if you have some good examples to reference against?
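
    For readers unfamiliar with the "implicit joins" complaint: the TPC-DS reference queries join via comma-separated FROM lists. A self-contained sketch of the two styles (toy tables, invented here):

    ```sh
    duckdb -c "
    CREATE TABLE customers AS SELECT * FROM (VALUES (1, 'Ada'), (2, 'Bo')) t(id, name);
    CREATE TABLE orders AS SELECT * FROM (VALUES (1, 10.0), (1, 5.0), (2, 20.0)) t(customer_id, total);
    -- implicit (SQL-92 comma) join, the style used in the TPC-DS queries:
    SELECT name, total FROM customers, orders WHERE id = customer_id;
    -- equivalent explicit join, generally easier to read and maintain:
    SELECT name, total FROM customers JOIN orders ON id = customer_id;
    "
    ```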

  • SQL-92 in TPC Benchmarks: Are They Still Relevant?
    1 project | dev.to | 25 Oct 2024
    I was reading "pg_duckdb beta release: Even faster analytics in Postgres", which demonstrates that TPC-DS Query 01 executes 1500 times faster on DuckDB than on PostgreSQL. Naturally, I was curious to see how this query performs in YugabyteDB. However, when I examined the SQL query that was used - one that repeatedly accesses the same table and conducts analytics without utilizing analytic functions - I wondered: should we be spending time, in 2024, examining queries from analytics benchmarks written against SQL-92 while ignoring the window functions introduced in SQL:2003?
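
    To make the point concrete: TPC-DS Query 01 effectively looks for customers whose returns exceed 1.2x their store's average, re-scanning the table in a correlated subquery; a window function does it in one pass (toy data, invented here):

    ```sh
    duckdb -c "
    CREATE TABLE store_returns AS SELECT * FROM (VALUES
      (1, 10, 130.0), (2, 10, 80.0), (3, 20, 50.0), (4, 20, 200.0)
    ) t(customer, store, amt);
    -- SQL-92 style: correlated subquery re-reads the table
    SELECT customer FROM store_returns r1
    WHERE amt > 1.2 * (SELECT AVG(amt) FROM store_returns r2 WHERE r2.store = r1.store);
    -- SQL:2003 style: one pass with a window function
    SELECT customer FROM (
      SELECT customer, amt, 1.2 * AVG(amt) OVER (PARTITION BY store) AS threshold
      FROM store_returns
    ) WHERE amt > threshold;
    "
    ```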
  • DuckDB v1.1.2
    1 project | news.ycombinator.com | 14 Oct 2024
  • DuckDB 1.1.0 Released
    4 projects | news.ycombinator.com | 9 Sep 2024
    The last I read, the Spark API was to become the focus point.

    https://duckdb.org/docs/api/python/spark_api

    Not sure what the current status is.

    ref: <https://github.com/duckdb/duckdb/issues/2000#issuecomment-18...>
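
    The Spark API in question lets PySpark-style code run with DuckDB underneath; a minimal sketch following the docs linked above (the API is experimental, so names and behavior may shift):

    ```sh
    python3 - <<'EOF'
    import pandas as pd
    # swap the pyspark import for DuckDB's experimental Spark-compatible API
    from duckdb.experimental.spark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(pd.DataFrame({"id": [1, 2], "val": ["a", "b"]}))
    print(df.collect())
    EOF
    ```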

What are some alternatives?

When comparing zsv and DuckDB you can also consider the following projects:

tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.

ClickHouse - ClickHouse® is a real-time analytics database management system

TimescaleDB - A time-series database for high-performance real-time analytics packaged as a Postgres extension

nebula - A distributed block-based data storage and compute engine

octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
