dsq
zsv
| | dsq | zsv |
|---|---|---|
| Mentions | 20 | 25 |
| Stars | 3,634 | 170 |
| Growth | 4.4% | - |
| Activity | 4.3 | 7.4 |
| Latest commit | 7 months ago | 10 days ago |
| Language | Go | C |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dsq
- Tracking SQLite Database Changes in Git
You might want to look at tsv-utils, or a similar project: https://github.com/eBay/tsv-utils
For the SQL part (though maybe a lot heavier), you can use one of the projects listed on this page: https://github.com/multiprocessio/dsq (no longer maintained, but it links to lots of other projects).
- DuckDB: Querying JSON files as if they were tables
Welcome to the gang! :)
https://github.com/multiprocessio/dsq#comparisons
- Ask HN: Programs that saved you 100 hours? (2022 edition)
- Command-line data analytics made easy
SPyQL is really cool and its design is very smart, especially its ability to leverage normal Python functions!
As far as similar tools go, I recommend taking a look at DataFusion[0], dsq[1], and OctoSQL[2].
DataFusion is a very (very very) fast command-line SQL engine but with limited support for data formats.
dsq is based on SQLite, which means it has to load data into SQLite first, but then it gives you the whole breadth of SQLite. It also supports many data formats, though it is slower as a result.
OctoSQL is faster, extensible through plugins, and supports incremental query execution, so you can, for example, calculate a running group-by count while tailing a log file. It also supports normal databases, not just file formats, so you can, for example, join with a Postgres table.
[0]: https://github.com/apache/arrow-datafusion
[1]: https://github.com/multiprocessio/dsq
[2]: https://github.com/cube2222/octosql
Disclaimer: Author of OctoSQL
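For a concrete feel of the difference in invocation style, here's a rough sketch (file and column names are made up, and exact flags can vary by version):
```
# dsq loads the file into SQLite first; the loaded table is referenced as {}
dsq access.json "SELECT status, COUNT(*) FROM {} GROUP BY status"

# OctoSQL queries the file in place; the file path acts as the table name
octosql "SELECT status, COUNT(status) FROM access.json GROUP BY status"
```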
- Jq Internals: Backtracking
> dsq registers go-sqlite3-stdlib so you get access to numerous statistics, url, math, string, and regexp functions that aren't part of the SQLite base. (https://github.com/multiprocessio/dsq#standard-library)
Ah, I wondered if they rolled their own SQL parser, but no: I now see the sqlite.go in the repo and all is made clear.
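For illustration, something like this is what the stdlib buys you; `url_host` and `median` are assumed function names based on the library's categories, so check the linked README for the exact spellings:
```
# hypothetical illustration: url_host and median stand in for the kinds of
# extra functions the stdlib adds; verify exact names against its README
dsq requests.csv "SELECT url_host(url), median(latency_ms) FROM {} GROUP BY 1"
```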
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
I am currently evaluating dsq and its partner desktop app DataStation. AIUI, the developer of DataStation realised that it would be useful to extract the underlying pieces into a standalone CLI, so they both support the same range of sources.
dsq CLI - https://github.com/multiprocessio/dsq
- multiprocessio/dsq
- OctoSQL allows you to join data from different sources using SQL
OctoSQL is an awesome project, and Kuba has a lot of great experience from building it that I'm excited to learn from.
And while building a custom database engine does allow you to do pretty quick queries, there are a few issues.
First, the SQL implemented is nonstandard. When I was looking for documentation, it pointed me to `SELECT * FROM docs.functions fs`. I tried to count the number of functions, but octosql crashed (a Go panic) when I ran `SELECT count(1) FROM docs.functions fs` and `SELECT count() FROM docs.functions fs`, which is what I lazily do in standard SQL databases. (`SELECT count(fs.name) FROM docs.functions fs` worked.)
This kind of thing will keep happening because this project just doesn't have as many resources today as SQLite, Postgres, DuckDB, etc. It will support a limited subset of SQL.
Second, the standard library seems pretty small. When I counted the builtin functions there were only 29. This is an easy thing to rectify over time, but it's worth noting the state today.
And third, this project only has builtin support for querying CSV and JSON files. Again, this could be easy to rectify over time, but it's the state today.
octosql is a great project but there are also different ways to do the same thing.
I built dsq [0], which runs all queries through SQLite, so it avoids point 1. It has access to SQLite's standard builtin functions plus a battery of extra statistical aggregation, string manipulation, URL manipulation, date manipulation, hashing, and math functions custom built to help with the kind of interactive querying developers commonly do [1].
And dsq supports not just CSV and JSON but also Parquet, Excel, ODS, ORC, YAML, TSV, and Apache and nginx logs.
A downside to dsq is that it is slower on large files (say, over 10GB) when you only want a few columns, whereas octosql does better in some of those cases. I'm hoping to improve this over time by adding a SQL filtering frontend to dsq, but in all cases dsq will ultimately use SQLite as the query engine.
You can find more info about similar projects in octosql's Benchmark section, but I also have a comparison section in dsq [2] and an extension of the octosql benchmark with a different set of tools [3], including duckdb.
Everyone should check out duckdb. :)
[0] https://github.com/multiprocessio/dsq
[1] https://github.com/multiprocessio/go-sqlite3-stdlib
[2] https://github.com/multiprocessio/dsq#comparisons
[3] https://github.com/multiprocessio/dsq#benchmark
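To make point 1 concrete: because dsq hands the whole query to SQLite, the lazy spellings mentioned above just work (a sketch; users.csv is a made-up file):
```
# both lazy spellings are valid SQLite, so they work unchanged in dsq
dsq users.csv "SELECT COUNT(1) FROM {}"
dsq users.csv "SELECT COUNT(*) FROM {}"
```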
- GitHub Actions are down again
What's annoying about this is that the PR doesn't even say it's trying to run tests. It says everything is passing and just doesn't list the actions.
For a second I thought someone must have deleted the actions yaml files.
This is a dangerous failure mode.
https://github.com/multiprocessio/dsq/pull/82
- Xlite: Query Excel, Open Document spreadsheets (.ods) as SQLite virtual tables
This is a cool project! But if you query Excel and ODS files with dsq [0], you get the same thing plus a growing standard library of functions that don't come built into SQLite, such as best-effort date parsing, URL parsing/extraction, statistical aggregation functions, math functions, string and regex helpers, hashing functions, and so on [1].
[0] https://github.com/multiprocessio/dsq
[1] https://github.com/multiprocessio/go-sqlite3-stdlib
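For instance, querying a spreadsheet directly looks something like this (a sketch; file, sheet handling, and column names are assumptions, not from the dsq docs):
```
# query a spreadsheet as a SQL table (file and column names are made up;
# sheet-selection behavior is assumed - see the dsq README)
dsq sales.xlsx "SELECT region, SUM(amount) FROM {} GROUP BY region"
```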
zsv
- Analyzing multi-gigabyte JSON files locally
If the data could be tabular in nature, maybe convert it to sqlite3 so you can make use of indexing, or to CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).
https://github.com/BurntSushi/xsv
https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
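A minimal sketch of the conversion route, assuming an array of flat JSON objects and jq installed (field names are hypothetical):
```
# flatten an array of JSON objects to CSV with a header row
# (field names are hypothetical)
jq -r '(["id","name","value"] | @csv), (.[] | [.id, .name, .value] | @csv)' big.json > big.csv

# import into SQLite and index for fast repeated queries
sqlite3 -cmd '.mode csv' -cmd '.import big.csv data' big.db \
  'CREATE INDEX idx_name ON data(name);'
```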
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). CSV parsers are fast enough that unless you are doing something ultra-trivial, such as counting rows, your bottleneck will be elsewhere.
The benefits of CSV are:
- human readable
- does not need to be typed (sometimes raw data, such as date-formatted data, is not amenable to typing without introducing a pre-processing layer that takes you further from the original data)
- accessible to anyone: you don't need to be a data person to double-click and open it in Excel or similar
The main drawback is that if your data is already typed, CSV does not communicate what the types are. You can alleviate this through various approaches, such as the one described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be assured your starting data conforms to non-text data types, there are probably better formats than CSV.
The main benefit of Arrow, IMHO, is less as a format for transmitting/communicating and more as a format for data at rest, where it benefits from higher-performance column-based reads and compression.
- Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
- csvkit: Command-line tools for working with CSV
I wanted so much to use csvkit and all the features it had, but its horrendous performance made it unscalable, so the more I used it, the more technical debt I accumulated.
This was one of the reasons I wrote zsv (https://github.com/liquidaty/zsv). Maybe csvkit could incorporate the zsv engine and we could get the best of both worlds?
Examples (using majestic million csv):
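Something along these lines (a sketch of the comparison, not actual timings; `csvsql` is csvkit's SQL command and names the table after the file, while `zsv sql` exposes the input as a table named `data`):
```
# csvkit (Python); csvsql derives the table name from the file name
time csvsql --query 'SELECT COUNT(*) FROM majestic_million' majestic_million.csv

# zsv (C); zsv sql exposes the input as a table named data
time zsv sql 'SELECT COUNT(*) FROM data' majestic_million.csv
```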
- Ask HN: Programs that saved you 100 hours? (2022 edition)
- Show HN: Split CSV into multiple files to avoid Excel's 1M row limitation
Splitting by line count assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line-ends. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and replaces newlines with \n.
You can also use something like `xsv split` (see https://lib.rs/crates/xsv), which frankly is probably your best option as of today (though zsv will be getting its own shard command soon); a minimal zsv-based pipeline is sketched below.
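A minimal version of that pipeline, assuming zsv and standard coreutils (the \n-escaped newlines survive a line-based split):
```
# escape embedded newlines via TSV conversion, then split into
# 1,000,000-line chunks (under Excel's 1,048,576-row cap)
zsv 2tsv input.csv | split -l 1000000 - chunk_
```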
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
- Ask HN: Best way to find help creating technical doc (open- or closed-source)?
I'm looking for one-time help creating documentation (e.g. man pages, tutorials) for an open source project (e.g. https://github.com/liquidaty/zsv), as well as product documentation for commercial products, but there isn't enough need for a full-time job. It requires familiarity with, for lack of a better term, data janitorial work, and preferably with methods of auto-generating documentation. Any suggestions as to forums or other ways to find folks who might fit the bill for ad-hoc or part-time work of this nature?
- Q – Run SQL Directly on CSV or TSV Files
Nice work. I am a fan of tools like this and look forward to giving this a try.
However, in my first attempted query (version 3.1.6 on MacOS), I ran into significant performance limitations and, more importantly, it did not give correct output.
In particular, running on a narrow table with 1 million rows (the same one used in the xsv examples), the command "select country, count(*) from worldcitiespop_mil.csv group by country" takes 12 seconds just to produce an incorrect error: 'no such column: country'.
Using sqlite3, it takes two seconds or so to load, less than a second to run, and gives me the correct result.
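Roughly the sqlite3 two-step being described (a sketch; the table name `cities` is my choice):
```
# a sketch of the sqlite3 two-step; the table name cities is my choice
sqlite3 -cmd '.mode csv' -cmd '.import worldcitiespop_mil.csv cities' \
  :memory: 'SELECT country, COUNT(*) FROM cities GROUP BY country'
```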
Using https://github.com/liquidaty/zsv (disclaimer, I'm one of its authors), I get the correct results in 0.95 seconds with the one-liner `zsv sql 'select country, count(*) from data group by country' worldcitiespop_mil.csv`.
I look forward to trying it again sometime soon.
- A Trillion Prices
All this banter arguing over CSV, JSON, sqlite seems unnecessary when you can just push format X through a pipe and get whichever format Y you want back out: https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
(disclaimer: I'm one of the zsv authors)
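For example, CSV to JSON becomes a single pipe (a sketch; `2json` is assumed as the converse of the `2tsv` subcommand mentioned elsewhere in this thread):
```
# CSV in, JSON out in one step; 2json is assumed as the converse of the
# 2tsv subcommand mentioned elsewhere in this thread
zsv 2json data.csv > data.json
```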
What are some alternatives?
go-duckdb - go-duckdb provides a database/sql driver for the DuckDB database engine.
visidata - A terminal spreadsheet multitool for discovering and arranging data
q - q - Run SQL directly on delimited files and multi-file sqlite databases
duckdb - DuckDB is an in-process SQL OLAP Database Management System
querycsv - QueryCSV enables you to load CSV files and manipulate them using SQL queries then after you finish you can export the new values to a CSV file
lnav - Log file navigator
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
xlite - Query Excel spreadsheets (.xlsx, .xls, .ods) using SQLite
ClickHouse - ClickHouse® is a free analytics DBMS for big data
textql - Execute SQL against structured text like CSV or TSV
nio - Low Overhead Numerical/Native IO library & tools