Similar projects and alternatives to zsv
A fast CSV command line toolkit written in Rust.
eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
DuckDB is an in-process SQL OLAP Database Management System
A terminal spreadsheet multitool for discovering and arranging data
ClickHouse® is a free analytics DBMS for big data
perl5 module for composition and decomposition of comma-separated values
A distributed block-based data storage and compute engine (by varchar-io)
OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
AutoHotkey - macro-creation and automation-oriented scripting utility for Windows.
Use SQL to instantly query your cloud services (AWS, Azure, GCP and more). Open source CLI. No DB required.
SQL-like query language for csv
The Python programming language
:cherry_blossom: A command-line fuzzy finder
🐶 Kubernetes CLI To Manage Your Clusters In Style!
Parsing gigabytes of JSON per second
Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
Commandline tool for running SQL queries against JSON, CSV, Excel, Parquet, and more.
zsv reviews and mentions
Analyzing multi-gigabyte JSON files locally
14 projects | news.ycombinator.com | 18 Mar 2023
If it could be tabular in nature, maybe convert to sqlite3 so you can make use of indexing, or CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).
Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
20 projects | news.ycombinator.com | 6 Mar 2023
Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). The speed of CSV parsers is fast enough that unless you are doing something ultra-trivial such as "count rows", your bottleneck will be elsewhere.
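The point that even "count rows" is not trivial for CSV can be illustrated with Python's standard csv module (a minimal sketch; the C- and Rust-based tools mentioned above do the same parsing work far faster):

```python
import csv
import io

# A CSV where one field contains an embedded newline inside quotes.
data = 'name,notes\nalice,"line one\nline two"\nbob,ok\n'

# Naively counting newlines over-counts the records:
naive_rows = data.count("\n")  # 4 physical lines

# A real CSV parser sees only the logical records:
records = list(csv.reader(io.StringIO(data)))
print(naive_rows)        # 4
print(len(records) - 1)  # 2 data rows (header excluded)
```

This is why a fast, correct parser has to track quoting state rather than just scan for newlines.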
The benefits of CSV are:
- human readable
- does not need to be typed (sometimes raw data, such as date-formatted values, is not amenable to typing without introducing a pre-processing layer that takes you further from the original data)
- accessible to anyone: you don't need to be a data person to double-click and open it in Excel or similar
The main drawback is that if your data is already typed, CSV does not communicate what the types are. You can alleviate this through various approaches, such as the one described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be assured your starting data conforms to non-text data types, there are probably better formats than CSV.
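The typing gap can be seen in a few lines of Python (the sample data here is hypothetical; any type-inference scheme is application-specific):

```python
import csv
import io

data = "id,price,when\n1,9.99,2023-03-01\n2,12.50,2023-03-02\n"
rows = list(csv.DictReader(io.StringIO(data)))

# Every value comes back as a string; the schema lives outside the file.
print(type(rows[0]["price"]))  # <class 'str'>

# Consumers must re-apply types themselves, e.g.:
prices = [float(r["price"]) for r in rows]
print(round(sum(prices), 2))  # 22.49
```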
The main benefit of Arrow, IMHO, is less as a format for transmitting or communicating data than as a format for data at rest, where it benefits from high-performance column-based reads and compression.
Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
11 projects | news.ycombinator.com | 4 Feb 2023
Ask HN: Programs that saved you 100 hours? (2022 edition)
69 projects | news.ycombinator.com | 20 Dec 2022
Show HN: Split CSV into multiple files to avoid the Excel's 1M row limitation
2 projects | news.ycombinator.com | 17 Oct 2022
This of course assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line endings. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and escapes embedded newlines as \n.
You can also use something like `xsv split` (see https://lib.rs/crates/xsv), which frankly is probably your best option as of today (though zsv will be getting its own shard command soon).
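As a rough sketch of what such a split has to handle, here is a pure-Python version using the standard csv module, which parses quoted embedded newlines directly, so no 2tsv-style preprocessing is needed (function names are illustrative, not from any of the tools above):

```python
import csv

def split_csv(path, rows_per_file, out_prefix):
    """Split a CSV into chunks of at most rows_per_file data rows,
    repeating the header in each chunk. csv.reader tracks quoting
    state, so fields containing embedded newlines stay intact."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        chunk, part = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) == rows_per_file:
                _write_chunk(out_prefix, part, header, chunk)
                chunk, part = [], part + 1
        if chunk:
            _write_chunk(out_prefix, part, header, chunk)

def _write_chunk(prefix, part, header, rows):
    with open(f"{prefix}_{part}.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)
        w.writerows(rows)
```

A dedicated tool will be much faster on multi-gigabyte files, but the logic is the same: split on record boundaries, not physical lines.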
Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
9 projects | news.ycombinator.com | 24 Sep 2022
Q – Run SQL Directly on CSV or TSV Files
13 projects | news.ycombinator.com | 21 Sep 2022
Nice work. I am a fan of tools like this and look forward to giving this a try.
However, in my first attempted query (version 3.1.6 on macOS), I ran into significant performance limitations and, more importantly, incorrect output.
In particular, running on a narrow table with 1 million rows (the same one used in the xsv examples), the command `select country, count(*) from worldcitiespop_mil.csv group by country` takes 12 seconds just to return the incorrect error "no such column: country".
Using sqlite3, it takes about two seconds to load, less than a second to run, and gives the correct result.
Using https://github.com/liquidaty/zsv (disclaimer, I'm one of its authors), I get the correct results in 0.95 seconds with the one-liner `zsv sql 'select country, count(*) from data group by country' worldcitiespop_mil.csv`.
I look forward to trying it again sometime soon.
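The sqlite3 baseline being compared against can be sketched with Python's built-in sqlite3 module; the three-row dataset here is a tiny hypothetical stand-in for the worldcitiespop file:

```python
import csv
import io
import sqlite3

# Hypothetical stand-in data for the 1-million-row city file.
data = "city,country\nparis,fr\nlyon,fr\nberlin,de\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (city TEXT, country TEXT)")
reader = csv.reader(io.StringIO(data))
next(reader)  # skip the header row
conn.executemany("INSERT INTO data VALUES (?, ?)", reader)

query = "SELECT country, count(*) FROM data GROUP BY country ORDER BY country"
for row in conn.execute(query):
    print(row)
# ('de', 1)
# ('fr', 2)
```

The load step (CREATE TABLE plus inserts) is where sqlite3 spends its seconds on large files; `zsv sql` avoids a separate load by reading the CSV through its own parser.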
A Trillion Prices
5 projects | news.ycombinator.com | 6 Sep 2022
All this banter over CSV, JSON, and sqlite seems unnecessary when you can just push format X through a pipe and get whichever format Y you want back out: https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
(disclaimer: I'm one of the zsv authors)
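The push-X-through-a-pipe idea can be sketched in a few lines of Python; this is a minimal stand-in for the concept, not the zsv implementation (in a real pipeline `src`/`dst` would be sys.stdin/sys.stdout):

```python
import csv
import io
import json

def csv_to_jsonl(src, dst):
    """Read CSV from src, write one JSON object per data row to dst."""
    for row in csv.DictReader(src):
        dst.write(json.dumps(row) + "\n")

# Demonstrate with in-memory streams instead of a shell pipe.
out = io.StringIO()
csv_to_jsonl(io.StringIO("city,country\nparis,fr\n"), out)
print(out.getvalue(), end="")  # {"city": "paris", "country": "fr"}
```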
One-liner for running queries against CSV files with SQLite
20 projects | news.ycombinator.com | 21 Jun 2022
https://github.com/liquidaty/zsv/blob/main/app/external/sqli... modifies the sqlite3 virtual table engine to use the faster zsv parser. I have not quantified the difference, but in every test I have run, `zsv sql` runs faster (sometimes much faster) than the other sqlite3-on-CSV solutions mentioned in this discussion (unless you include those that cache their indexes and then measure against a post-cached query). Disclaimer: I'm the main zsv author.
It's nice that q has caching... then again, it rather needs it to compensate for its performance inefficiency.
Running `select *` on a 1-million-row worldcitiespop file, q takes 27 seconds, compared to 1.7 seconds for `zsv sql` (https://github.com/liquidaty/zsv). I'm sure q is faster once cached, but taking a 16x performance hit up-front is not for me.
liquidaty/zsv is an open source project licensed under MIT License which is an OSI approved license.