zsv VS nio

Compare zsv and nio and see what their differences are.

zsv

zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser (by liquidaty)

nio

Low Overhead Numerical/Native IO library & tools (by c-blake)
                 zsv           nio
Mentions         25            7
Stars            171           32
Growth           -             -
Activity         7.5           6.3
Last commit      17 days ago   11 days ago
Language         C             Nim
License          MIT License   ISC License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

zsv

Posts with mentions or reviews of zsv. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-18.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
    If it could be tabular in nature, maybe convert to sqlite3 so you can make use of indexing, or CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).

    https://github.com/BurntSushi/xsv

    https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
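
    To make the sqlite3 route concrete, below is a minimal C sketch of loading a CSV into SQLite and adding an index (build with -lsqlite3). The file names, the two-column schema, and the first-comma-only field split are illustrative assumptions; a real loader would use a proper CSV parser such as zsv.

    ```c
    /* Hypothetical sketch: load a naively parsed 2-column CSV into SQLite
     * so that queries can use an index. Assumes no quoted fields. */
    #include <sqlite3.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *ins;
        char line[4096];
        FILE *f = fopen("data.csv", "r");                 /* illustrative name */
        if (!f || sqlite3_open("data.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE t(a TEXT, b TEXT); BEGIN;", 0, 0, 0);
        sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?,?)", -1, &ins, 0);
        while (fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = 0;
            char *comma = strchr(line, ',');              /* naive: first comma only */
            if (!comma) continue;
            *comma = 0;
            sqlite3_bind_text(ins, 1, line, -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(ins, 2, comma + 1, -1, SQLITE_TRANSIENT);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        /* one transaction + an index: this is what buys fast repeated queries */
        sqlite3_exec(db, "COMMIT; CREATE INDEX i_a ON t(a);", 0, 0, 0);
        sqlite3_finalize(ins);
        sqlite3_close(db);
        fclose(f);
        return 0;
    }
    ```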

  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). The speed of CSV parsers is fast enough that unless you are doing something ultra-trivial such as "count rows", your bottleneck will be elsewhere.

    The benefits of CSV are:

    - human readable

    - does not need to be typed (sometimes raw data, such as date-formatted data, is not amenable to typing without introducing a pre-processing layer that takes you further from the original data)

    - accessible to anyone: you don't need to be a data person to double-click and open it in Excel or similar

    The main drawback is that if your data is already typed, CSV does not communicate what the type is. You can alleviate this through various approaches such as is described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be assured that your starting data conforms to non-text data types, there are probably better formats than CSV.

    The main benefit of Arrow, IMHO, is less as a format for transmitting/communicating and more as a format for data at rest that benefits from higher-performance column-based reads and compression.

  • Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
    11 projects | news.ycombinator.com | 4 Feb 2023
  • csvkit: Command-line tools for working with CSV
    1 project | news.ycombinator.com | 20 Jan 2023
    I wanted so much to use csvkit and all the features it had, but its horrendous performance made it unscalable; the more I used it, the more technical debt I accumulated.

    This was one of the reasons I wrote zsv (https://github.com/liquidaty/zsv). Maybe csvkit could incorporate the zsv engine and we could get the best of both worlds?


  • Ask HN: Programs that saved you 100 hours? (2022 edition)
    69 projects | news.ycombinator.com | 20 Dec 2022
  • Show HN: Split CSV into multiple files to avoid Excel's 1M row limitation
    2 projects | news.ycombinator.com | 17 Oct 2022

    The line-splitting approach of course assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line ends. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and replaces embedded newlines with \n.

    You can also use something like `xsv split` (see https://lib.rs/crates/xsv), which frankly is probably your best option as of today (though zsv will be getting its own shard command soon).
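
    As a rough illustration of the splitting step itself, here is a minimal C sketch that shards one-record-per-line input (e.g. the output of `2tsv`) into fixed-size files. The chunk size and output names are made up, and a real splitter would also repeat the header row in each shard.

    ```c
    /* Hypothetical sketch: write every CHUNK lines of stdin to a new file. */
    #include <stdio.h>

    #define CHUNK 1000000L   /* rows per shard, e.g. just under Excel's limit */

    int main(void) {
        char line[1 << 16];
        long row = 0;
        int part = 0;
        FILE *out = NULL;
        while (fgets(line, sizeof line, stdin)) {
            if (row % CHUNK == 0) {                     /* start a new shard */
                if (out) fclose(out);
                char name[64];
                snprintf(name, sizeof name, "part%03d.tsv", part++);
                if (!(out = fopen(name, "w"))) return 1;
            }
            fputs(line, out);
            row++;
        }
        if (out) fclose(out);
        return 0;
    }
    ```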

  • Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
    9 projects | news.ycombinator.com | 24 Sep 2022
  • Ask HN: Best way to find help creating technical doc (open- or closed-source)?
    1 project | news.ycombinator.com | 23 Sep 2022
    I am looking for one-time help creating documentation (e.g. man pages, tutorials) for an open-source project (e.g. https://github.com/liquidaty/zsv), as well as product documentation for commercial products, but there is not enough need for a full-time job. It requires familiarity with, for lack of a better term, data janitorial work, and preferably with methods of auto-generating documentation. Any suggestions as to forums or other ways to find folks who might fit the bill for ad-hoc or part-time work of this nature?
  • Q – Run SQL Directly on CSV or TSV Files
    13 projects | news.ycombinator.com | 21 Sep 2022
    Nice work. I am a fan of tools like this and look forward to giving this a try.

    However, in my first attempted query (version 3.1.6 on macOS), I ran into significant performance limitations and, more importantly, it did not give correct output.

    In particular, running on a narrow table with 1 million rows (the same one used in the xsv examples) using the command "select country, count(*) from worldcitiespop_mil.csv group by country" takes 12 seconds just to return an incorrect error: 'no such column: country'.

    Using sqlite3, it takes two seconds or so to load, less than a second to run, and gives me the correct result.

    Using https://github.com/liquidaty/zsv (disclaimer: I'm one of its authors), I get the correct results in 0.95 seconds with the one-liner `zsv sql 'select country, count(*) from data group by country' worldcitiespop_mil.csv`.

    I look forward to trying it again sometime soon.

  • A Trillion Prices
    5 projects | news.ycombinator.com | 6 Sep 2022
    All this banter arguing over CSV, JSON, sqlite seems unnecessary when you can just push format X through a pipe and get whichever format Y you want back out: https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...

    (disclaimer: I'm one of the zsv authors)

nio

Posts with mentions or reviews of nio. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-21.
  • GNU Parallel, where have you been all my life?
    19 projects | news.ycombinator.com | 21 Aug 2023
    Sure. No problem.

    Even Windows has popen these days. There are some tiny popenr/popenw wrappers in https://github.com/c-blake/cligen/blob/master/cligen/osUt.ni...

    Depending upon how balanced the work is on either side of the pipe, you can usually even get parallel speed-up on multi-core with almost no extra work. For example, there is no need to use quote-escape-aware CSV parsing libraries when you just read from a popen()'d translator program producing an easier format: https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim
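
    A minimal C sketch of that pattern, assuming some `c2tsv`-style CSV-to-TSV filter is on the PATH (the command and file name here are illustrative): the consumer only ever sees tab-separated lines, so no quote handling is needed on this side of the pipe.

    ```c
    /* Hypothetical sketch: read from a popen()'d translator, parse plain TSV. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *p = popen("c2tsv < data.csv", "r");   /* translator runs in parallel */
        if (!p) return 1;
        char line[1 << 16];
        while (fgets(line, sizeof line, p)) {
            /* note: strtok collapses empty fields; real code would walk the tabs */
            for (char *tok = strtok(line, "\t\n"); tok; tok = strtok(NULL, "\t\n"))
                printf("[%s] ", tok);
            putchar('\n');
        }
        return pclose(p);
    }
    ```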

  • Offensive Nim
    8 projects | news.ycombinator.com | 1 Jan 2023
    I think it's a pretty nice prog.lang. You may be very happy. Though nothing is perfect, there is much to recommend it. By now I've written over 150 command-line tools with https://github.com/c-blake/cligen . A few are at https://github.com/c-blake/bu or https://github.com/c-blake/nio (screw 1970s COBOL-esque SQL) or in their own repos.

    If it helps, I like to use the "mob branch" [0] of TinyCC/tcc [1] for really fast builds in debugging mode, but this may only work if you toss `@if tcc: mm:markAndSweep @end` or similar in your nim.cfg. Then I have a little `@if r: ...` so I can say `nim c -d:r foo` for a release build with gcc/whatever.

    [0] https://repo.or.cz/w/tinycc.git

    [1] https://en.wikipedia.org/wiki/Tiny_C_Compiler

  • Modernizing AWK, a 45-year old language, by adding CSV support
    11 projects | news.ycombinator.com | 12 May 2022
    When you have "format wars", the best idea is usually to have a converter program change to the easiest to work with format - unless this incurs a massive expansion in space as per some image/video formats.

    With CSV-like data, bulk conversion from quote-escaped RFC 4180 CSV to a simpler-to-parse format is the best plan for several reasons.

    First, it may "catch on" and help Microsoft/R/whoever embrace the format, and in doing so squash many bugs written by "data analyst/scientist coders".

    Second, in a shell, a|b runs programs a & b in parallel on multi-core, and allows things like csv2x|head -n10000|b.

    Third, bulk conversion to a random-access file where literal delimiters cannot occur as non-delimiters allows trivial file segmentation to be nCores times faster (under often-satisfied assumptions). There are some D tools for this in https://github.com/eBay/tsv-utils and a much smaller stand-alone Nim tool, https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim . Optional quoting was always going to be a PITA due to its non-locality: what if there is no quote anywhere?

    Fourth, by using a program as the unit of modularity in this case, you make things programming-language agnostic. Someone could go to town and write a pure SIMD/AVX512 converter, even in assembly, and solve the problem "once and for all" on a given CPU. The problem is actually just simple enough that this smells possible.

    I am unaware of any "document" that "standardizes" this escaped/lossless TSV format. { Maybe call it "DSV" for delimiter separated values where "delimiters actually separate"? } Someone want to write an RFC or point to one? It can be just as "general/lossless" (see https://news.ycombinator.com/item?id=31352170).

    Of course, if you are going to do a lot of data processing against some data, it is even better to parse all the way down to binary so that you never have to parse again (well, unless you call CPUs loading registers "parsing"), which is what database systems have been doing since the 1960s.
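
    For concreteness, a minimal C sketch of the "delimiters actually separate" escaping described above: within a field, literal tab, newline, and backslash become \t, \n, and \\, so every real tab is a column break and every real newline is a row break. This is an illustration of the idea, not the actual code from c2tsv.nim.

    ```c
    /* Hypothetical sketch of lossless "DSV" field escaping. */
    #include <stdio.h>

    static void put_field(const char *s) {
        for (; *s; s++)
            switch (*s) {
            case '\t': fputs("\\t", stdout); break;
            case '\n': fputs("\\n", stdout); break;
            case '\\': fputs("\\\\", stdout); break;
            default:   putchar(*s);
            }
    }

    int main(void) {
        put_field("hello\tworld");   /* the tab inside this field gets escaped */
        putchar('\t');               /* this tab is a real column separator */
        put_field("line1\nline2");   /* likewise the embedded newline */
        putchar('\n');               /* this newline is a real row separator */
        return 0;
    }
    ```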

  • Unix command line conventions over time
    9 projects | news.ycombinator.com | 7 May 2022
    With https://github.com/c-blake/nio/blob/main/utils/catz.nim you can get similar format-agnostic decoding/decompression not just in tar but in any pipeline context (based on magic numbers, not filename extensions, and even doing the copy loop needed for unseekable inputs to replace the early read).
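
    A minimal C sketch of magic-number dispatch in that spirit, checking a few well-known signatures (gzip, zstd, bzip2) on a seekable file. catz also handles unseekable pipes via the copy loop mentioned above; this sketch sidesteps that by rewinding.

    ```c
    /* Hypothetical sketch: pick a decompressor by leading magic bytes. */
    #include <stdio.h>

    static const char *decoder_for(FILE *f) {
        unsigned char m[4] = {0};
        size_t n = fread(m, 1, sizeof m, f);
        fseek(f, 0, SEEK_SET);        /* un-read; a pipe would need a copy loop */
        if (n >= 2 && m[0] == 0x1f && m[1] == 0x8b) return "gzip -dc";
        if (n >= 4 && m[0] == 0x28 && m[1] == 0xb5 && m[2] == 0x2f && m[3] == 0xfd)
            return "zstd -dc";
        if (n >= 3 && m[0] == 'B' && m[1] == 'Z' && m[2] == 'h') return "bzip2 -dc";
        return NULL;                  /* no match: treat as plain data */
    }

    int main(int argc, char **argv) {
        FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
        if (!f) return 1;
        const char *cmd = decoder_for(f);
        printf("%s\n", cmd ? cmd : "(plain)");
        fclose(f);
        return 0;
    }
    ```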
  • Show HN: Prig – like Awk, but uses Go for “scripting”
    3 projects | news.ycombinator.com | 1 Mar 2022
    Part of the value prop is that you can directly author/access any library code in your native prog.lang in your query as naturally as native imports and routine calls. Awk is pretty established and for trivial one-liners you may not need any libs, but there are a lot more Go/Python/etc. libs than awk libs, AFAICT.

    Beyond lib existence there is lib API understanding. There is not only language learning to be saved, but also lib ecosystem learning. I'm sure 'perl -na' folks could rattle off dozens of examples with CPAN package PDQ. I know many people who would say learning the ecosystem is what takes the most time...

    There is also little barrier to using the same approach to also "compile" your data as well as the code. This is basically how 'nio qry' works as a kind of open architecture ghetto DB. [1] I mean, yeah, Big Iron, corporate database XYZ probably also has some ugly, non-portable extension language with 1970s prog.lang aesthetics that also sacrifices much value of SQL standardization.

    [1] https://github.com/c-blake/nio

  • Fast CSV Processing with SIMD
    5 projects | news.ycombinator.com | 4 Dec 2021
    I get ~50% of the speed of the article's variant with no SIMD at all in https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim

    While it is in Nim, the main logic is really just about one screenful and should not be hard to follow.

    As commented elsewhere, but it bears repeating: a better approach is to bulk-convert text to binary and then operate off of that. One feature you get is fixed-size rows and thus random access to rows without an index. You can even mmap the file and cast it to a struct pointer if you like (though you need the struct pointer to be the right type). When DBs or DB-ish file formats are faster, being in binary is the 0th-order reason why.

    The main reason not to do this is if you have no disk space for it.
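
    A minimal C sketch of that binary-at-rest idea: with fixed-size rows, the file can be mmap()d and indexed as an array of structs, so reading row i is a pointer offset rather than a parse. The Row layout and file name are illustrative assumptions, not nio's actual format.

    ```c
    /* Hypothetical sketch: random access into a binary file of fixed-size rows. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    typedef struct { float lat, lon; int population; } Row;   /* fixed-size row */

    int main(void) {
        int fd = open("cities.bin", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) return 1;
        const Row *rows = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (rows == MAP_FAILED) return 1;
        size_t n = st.st_size / sizeof(Row);                  /* row count, no index */
        if (n > 12345)                                        /* O(1) random access */
            printf("row 12345: %f,%f,%d\n",
                   rows[12345].lat, rows[12345].lon, rows[12345].population);
        munmap((void *)rows, st.st_size);
        close(fd);
        return 0;
    }
    ```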

What are some alternatives?

When comparing zsv and nio you can also consider the following projects:

visidata - A terminal spreadsheet multitool for discovering and arranging data

jbang - Unleash the power of Java - JBang Lets Students, Educators and Professional Developers create, edit and run self-contained source-only Java programs with unprecedented ease.

duckdb - DuckDB is an in-process SQL OLAP Database Management System

procs - Unix process&system query&format lib&multi-command CLI in Nim

lnav - Log file navigator

tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.

DataProfiler - What's in your data? Extract schema, statistics and entities from datasets

ClickHouse - ClickHouse® is a free analytics DBMS for big data

goawk - A POSIX-compliant AWK interpreter written in Go, with CSV support

q - q - Run SQL directly on delimited files and multi-file sqlite databases

csvquote - enables common Unix utilities like cut, awk, wc, and head to work correctly with CSV data containing embedded commas and newlines