zsv vs dsq

| | zsv | dsq |
| --- | --- | --- |
| Mentions | 27 | 20 |
| Stars | 230 | 3,836 |
| Growth | 1.7% | 1.1% |
| Activity | 9.1 | 4.3 |
| Last commit | 2 days ago | almost 2 years ago |
| Language | C | Go |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zsv
-
How fast can you parse a CSV file in C#?
I haven't yet seen any of these beat https://github.com/liquidaty/zsv once real-world constraints are applied (e.g. we no longer assume that line ends are always \n, or that there are no double-quote characters or embedded commas/newlines/double-quotes), and maybe not even under the artificial conditions.
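To make the constraint concrete, here is a small sketch (mine, not from the comment above) showing how a naive split-on-newline parser miscounts rows on exactly this kind of input, while an RFC 4180-aware parser handles embedded quotes and newlines; Python's csv module stands in here for any proper CSV parser:

```
import csv
import io

# One record whose second field contains an embedded newline, a comma,
# and escaped double-quotes -- all legal inside a quoted CSV field.
data = 'id,comment\n1,"line one\nline two, with a comma and a ""quote"""\n2,plain\n'

# Naive approach: split on newlines. Yields 4 lines (header + 3 apparent
# records), because the embedded newline is mistaken for a record boundary.
naive_rows = [line for line in data.splitlines() if line]
print(len(naive_rows))  # 4

# Quote-aware parsing keeps the field intact: header + exactly 2 records.
real_rows = list(csv.reader(io.StringIO(data)))
print(len(real_rows))       # 3
print(real_rows[1][1])      # 'line one\nline two, with a comma and a "quote"'
```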
-
CSVs Are Kinda Bad. DSVs Are Kinda Good
I cannot imagine any way it is worth anyone's time to follow this article's suggestion rather than just using something like zsv (https://github.com/liquidaty/zsv, which I'm an author of) or xsv (https://github.com/BurntSushi/xsv) and spending the time saved on "real" work.
-
Analyzing multi-gigabyte JSON files locally
If it could be tabular in nature, maybe convert it to sqlite3 so you can make use of indexing, or to CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author of).
https://github.com/BurntSushi/xsv
https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
-
Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). CSV parsers are fast enough that unless you are doing something ultra-trivial such as "count rows", your bottleneck will be elsewhere.
The benefits of CSV are:
- human readable
- does not need to be typed (sometimes raw data, such as date-formatted data, is not amenable to typing without introducing a pre-processing layer that gets you further from the original data)
- accessible to anyone: you don't need to be a data person to double-click and open it in Excel or similar
The main drawback is that if your data is already typed, CSV does not communicate what the type is. You can alleviate this through various approaches such as the one described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be assured that your starting data conforms to non-text data types, there are probably better formats than CSV.
The main benefit of Arrow, IMHO, is less as a format for transmitting/communicating data and more as a format for data at rest, which benefits from higher-performance column-based reads and compression.
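As a concrete illustration of the typing drawback (my own sketch, not from the linked doc): the CSV itself carries no type information, so the reader has to supply it out of band, for example as an explicit column-to-type map applied in a pre-processing step. The column names and types below are hypothetical.

```
import csv
import io
from datetime import date

raw = "id,amount,when\n1,19.99,2023-05-01\n2,5.00,2023-05-02\n"

# The type map is out-of-band knowledge: nothing in the CSV itself says
# that "amount" is a float or that "when" is an ISO date.
column_types = {
    "id": int,
    "amount": float,
    "when": date.fromisoformat,
}

typed_rows = [
    {col: column_types[col](val) for col, val in row.items()}
    for row in csv.DictReader(io.StringIO(raw))
]
print(typed_rows[0])  # {'id': 1, 'amount': 19.99, 'when': datetime.date(2023, 5, 1)}
```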
- Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
-
csvkit: Command-line tools for working with CSV
I wanted so much to use csvkit and all the features it had, but its horrendous performance made it unscalable, and so the more I used it, the more technical debt I accumulated.
This was one of the reasons I wrote zsv (https://github.com/liquidaty/zsv). Maybe csvkit could incorporate the zsv engine and we could get the best of both worlds?
Examples (using majestic million csv):
---
- Ask HN: Programs that saved you 100 hours? (2022 edition)
-
Show HN: Split CSV into multiple files to avoid Excel's 1M row limitation
This of course assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line-ends. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and replaces embedded newlines with the literal \n.
You can also use something like `xsv split` (see https://lib.rs/crates/xsv), which frankly is probably your best option as of today (though zsv will be getting its own shard command soon).
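For anyone who just wants the splitting behavior without worrying about embedded line-ends, here is a minimal sketch (my own, not taken from either tool) that uses a quote-aware reader so embedded newlines don't produce broken chunks. The `split_csv` helper, the output file naming, and the 1,000,000-row chunk size are all arbitrary choices for illustration.

```
import csv

CHUNK_ROWS = 1_000_000  # stay under Excel's ~1,048,576-row sheet limit

def split_csv(path: str, prefix: str, chunk_rows: int = CHUNK_ROWS) -> None:
    """Split `path` into numbered CSV files of at most `chunk_rows` data rows,
    repeating the header in each output file."""
    with open(path, newline="") as src:
        reader = csv.reader(src)  # handles quoted fields with embedded newlines
        header = next(reader)
        part, out, writer, written = 0, None, None, chunk_rows
        for row in reader:
            if written >= chunk_rows:
                if out:
                    out.close()
                part += 1
                out = open(f"{prefix}_{part:03d}.csv", "w", newline="")
                writer = csv.writer(out)
                writer.writerow(header)
                written = 0
            writer.writerow(row)
            written += 1
        if out:
            out.close()

# split_csv("majestic_million.csv", "majestic_part")
```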
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
-
Ask HN: Best way to find help creating technical doc (open- or closed-source)?
I'm looking for one-time help creating documentation (e.g. man pages, tutorials) for an open source project (e.g. https://github.com/liquidaty/zsv), as well as product documentation for commercial products, but there isn't enough need for a full-time job. It requires familiarity with, for lack of a better term, data janitorial work, and preferably with methods of auto-generating documentation. Any suggestions as to forums or other ways to find folks who might fit the bill for ad-hoc or part-time work of this nature?
dsq
-
Tracking SQLite Database Changes in Git
You might want to look at tsv-utils, or a similar project: https://github.com/eBay/tsv-utils
For the SQL part, but maybe a lot heavier, you can use one of the projects listed on this page: https://github.com/multiprocessio/dsq (No longer maintained, but has links to lots of other projects)
-
DuckDB: Querying JSON files as if they were tables
Welcome to the gang! :)
https://github.com/multiprocessio/dsq#comparisons
- Ask HN: Programs that saved you 100 hours? (2022 edition)
-
Command-line data analytics made easy
SPyQL is really cool and its design is very smart, especially the way it can leverage normal Python functions!
As far as similar tools go, I recommend taking a look at DataFusion[0], dsq[1], and OctoSQL[2].
DataFusion is a very (very very) fast command-line SQL engine but with limited support for data formats.
dsq is based on SQLite, which means it has to load data into SQLite first, but that then gives you the whole breadth of SQLite (the load-then-query approach is sketched below). It also supports many data formats, but is slower as a result.
OctoSQL is faster, extensible through plugins, and supports incremental query execution, so you can e.g. calculate a running group by + count while tailing a log file. It also supports normal databases, not just file formats, so you can e.g. join with a Postgres table.
[0]: https://github.com/apache/arrow-datafusion
[1]: https://github.com/multiprocessio/dsq
[2]: https://github.com/cube2222/octosql
Disclaimer: Author of OctoSQL
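The load-then-query tradeoff described above can be sketched in a few lines. This is not dsq's actual code or query syntax, just the shape of the approach: pay an up-front load into SQLite, then any SQLite SQL works on the data. The `query_csv` helper, the table name `t`, and `people.csv` are made up for illustration.

```
import csv
import sqlite3

def query_csv(path: str, sql: str):
    """Load a CSV into an in-memory SQLite table named t, then run `sql` on it."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        conn = sqlite3.connect(":memory:")
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        conn.execute(f"CREATE TABLE t ({cols})")
        conn.executemany(f"INSERT INTO t VALUES ({placeholders})", reader)
        return conn.execute(sql).fetchall()

# e.g. query_csv("people.csv", "SELECT COUNT(*), AVG(age) FROM t")
```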
-
Jq Internals: Backtracking
> dsq registers go-sqlite3-stdlib so you get access to numerous statistics, url, math, string, and regexp functions that aren't part of the SQLite base. (https://github.com/multiprocessio/dsq#standard-library)
Ah, I wondered if they rolled their own SQL parser, but no, I now see the sqlite.go in the repo and all is made clear.
-
Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
I am currently evaluating dsq and its partner desktop app DataStation. AIUI, the developer of DataStation realised that it would be useful to extract the underlying pieces into a standalone CLI, so they both support the same range of sources.
dsq CLI - https://github.com/multiprocessio/dsq
- multiprocessio/dsq
- OctoSQL allows you to join data from different sources using SQL
-
GitHub Actions are down again
What's annoying about this is that the PR doesn't even say it's trying to run tests. It says everything is passing and just doesn't list the actions.
For a second I thought someone must have deleted the actions yaml files.
This is a dangerous failure mode.
https://github.com/multiprocessio/dsq/pull/82
-
Xlite: Query Excel, Open Document spreadsheets (.ods) as SQLite virtual tables
This is a cool project! But if you query Excel and ODS files with dsq, you get the same thing plus a growing standard library of functions that don't come built into SQLite, such as best-effort date parsing, URL parsing/extraction, statistical aggregation functions, math functions, string and regex helpers, hashing functions, and so on [1].
[0] https://github.com/multiprocessio/dsq
[1] https://github.com/multiprocessio/go-sqlite3-stdlib
What are some alternatives?
tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
go-duckdb - go-duckdb provides a database/sql driver for the DuckDB database engine.
ClickHouse - ClickHouse® is a real-time analytics database management system
jless - jless is a command-line JSON viewer designed for reading, exploring, and searching through JSON data.
nebula - A distributed block-based data storage and compute engine
textql - Execute SQL against structured text like CSV or TSV