One-liner for running queries against CSV files with SQLite

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com
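
For reference, the style of one-liner the post describes drives the stock sqlite3 shell to import a CSV into an in-memory table and query it in a single command (taxi.csv and its columns are illustrative):

    sqlite3 :memory: -cmd '.mode csv' -cmd '.import taxi.csv taxi' \
      'SELECT passenger_count, COUNT(*) FROM taxi GROUP BY passenger_count'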

  • zsv

    zsv+lib: world's fastest (simd) CSV parser, bare metal or wasm, with an extensible CLI for SQL querying, format conversion and more

  • https://github.com/liquidaty/zsv/blob/main/app/external/sqli... modifies the sqlite3 virtual table engine to use the faster zsv parser. I have not quantified the difference, but in every test I have run, `zsv sql` runs faster (sometimes much faster) than the other sqlite3-on-CSV solutions mentioned in this discussion (unless you count those that cache their indexes and are then measured against a post-cache query). Disclaimer: I'm the main zsv author

  • tidy-viewer

    📺(tv) Tidy Viewer is a cross-platform CLI csv pretty printer that uses column styling to maximize viewer enjoyment.

  • I am a data scientist. I have used a lot of tools/libraries to interact with data. SQLite is my favorite; its syntax/grammar is hard to beat.

    Also, when I use SQLite I do not output using column mode. I pipe to `tv` (tidy-viewer) to get pretty output (see the sketch below the link).

    https://github.com/alexhallam/tv
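
    A minimal sketch of that pipeline, assuming a hypothetical data.csv with a grp column (`.headers on` keeps the column names so tv can render them):

      sqlite3 :memory: -cmd '.mode csv' -cmd '.headers on' \
        -cmd '.import data.csv t' \
        'SELECT grp, COUNT(*) AS n FROM t GROUP BY grp' | tv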

  • octosql

    OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.

  • `Error: couldn't parse query: invalid argument syntax error at position 13 near 'index'`

    Hope this helps and I wish you all the best.

    [1] https://github.com/cube2222/octosql/wiki/Function-Documentat...
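
    For reference, OctoSQL points queries at the file itself; a minimal sketch, assuming a hypothetical invoices.csv with a customer column:

      octosql "SELECT customer, COUNT(*) AS cnt FROM invoices.csv GROUP BY customer"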

    One of my all-time favorite (and somehow still relatively obscure) tools is called `lnav` ^1. It's a mini-ETL power tool with embedded SQLite, perfect for wrangling log files or other semi-structured data (a few million rows are no problem); it's intuitive and flexible...

    1. https://lnav.org
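
    A sketch of that workflow, with a hypothetical log path: open the files in lnav, then press `;` to drop into its embedded SQLite prompt, where each detected log format is exposed as a table with standard columns such as log_line, log_time, and log_level:

      lnav /var/log/syslog
      # then, at the ';' prompt inside lnav:
      # ;SELECT log_level, COUNT(*) FROM syslog_log GROUP BY log_level;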

  • csvq

    SQL-like query language for csv

  • q

    q - Run SQL directly on delimited files and multi-file sqlite databases (by harelba)

  • sqlite_protobuf

    A SQLite extension for extracting values from serialized Protobuf messages

  • SQLite's virtual table API (https://www.sqlite.org/vtab.html) makes it possible to access other data structures through the query engine. You don't need to know much, if anything, about how the database engine executes queries; you only need to implement the callbacks it uses to do its job. A few years ago I wrote an extension to let me search through serialized Protobufs that were stored as blobs in a regular database.

    https://github.com/rgov/sqlite_protobuf
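
    As a concrete illustration of the mechanism, SQLite's own source tree ships a CSV virtual table (ext/misc/csv.c) built on these callbacks. A minimal sketch of compiling and using it from the shell, assuming a SQLite source checkout for the headers and a hypothetical data.csv:

      gcc -I. -fPIC -shared ext/misc/csv.c -o csv.so
      sqlite3 :memory: \
        -cmd '.load ./csv' \
        -cmd "CREATE VIRTUAL TABLE t USING csv(filename='data.csv', header=true);" \
        'SELECT COUNT(*) FROM t;'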

  • sqlite-parquet-vtable

    A SQLite vtable extension to read Parquet files

  • /? sqlite arrow

    - "Comparing SQLite, DuckDB and Arrow with UN trade data" https://news.ycombinator.com/item?id=29010103 ; partial benchmarks of query time and RAM requirements [relative to data size] would be

    - https://arrow.apache.org/blog/2022/02/16/introducing-arrow-f... :

    > Motivation: While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps.

    > - One cannot create a trigger on a virtual table.

    Just posted about eBPF a few days ago; opcodes have costs that are or are not costed: https://news.ycombinator.com/item?id=31688180

    > - One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)

    It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?

    > - One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.

    Are there URIs in the schema? Mustn't there thus be a meta-schema that does e.g. nested structs with portable types [with URIs], (and jsonschema, [and W3C SHACL])?

    /? sqlite arrow virtual table

    - sqlite-parquet-vtable reads parquet with arrow for SQLite virtual tables https://github.com/cldellow/sqlite-parquet-vtable :

      $ sqlite/sqlite3
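      # continuation per that repository's README (file name illustrative):
      sqlite> .load ./libparquet
      sqlite> CREATE VIRTUAL TABLE demo USING parquet('my.parquet');
      sqlite> SELECT COUNT(*) FROM demo;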

  • xsv

    A fast CSV command line toolkit written in Rust.

  • qhs

    SQL queries on CSV and TSV files

  • duckdb

    DuckDB is an in-process SQL OLAP Database Management System

  • DuckDB will automatically infer that you are reading a CSV file from the extension, then automatically infer column names from the header, along with various CSV properties (data types, delimiter, quote type, etc.). You don't even need to quote the table name, as long as the file is in your current directory.

    DuckDB uses the SQLite shell, so all of the commands that are mentioned in the article with SQLite will also work for DuckDB.

    [1] https://github.com/duckdb/duckdb

    Disclaimer: Developer of DuckDB
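
    A minimal sketch of that behavior (cities.csv and its country column are hypothetical; read_csv_auto is the explicit form if you want to override the sniffed options):

      duckdb :memory: "SELECT country, COUNT(*) FROM 'cities.csv' GROUP BY country"
      duckdb :memory: "SELECT * FROM read_csv_auto('cities.csv') LIMIT 5"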

  • visidata

    A terminal spreadsheet multitool for discovering and arranging data

  • ClickHouse

    ClickHouse® is a free analytics DBMS for big data
