sqlite-parquet-vtable vs csvq

| | sqlite-parquet-vtable | csvq |
|---|---|---|
| Mentions | 4 | 14 |
| Stars | 261 | 1,450 |
| Growth | - | - |
| Activity | 10.0 | 2.7 |
| Last commit | almost 3 years ago | 5 months ago |
| Language | C++ | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqlite-parquet-vtable
-
Universal Database
SQLite has a parquet extension (https://github.com/cldellow/sqlite-parquet-vtable) that exposes Parquet files as virtual tables. I use SQLite a lot, for work and personally. It's really good, but I do have issues with large datasets, mainly due to VACUUM operations. The insertion rate drops significantly once a single table hits around 20M rows. Indexing is important for your query speed, but it'll impact your write speed.
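The index-vs-write-speed tradeoff mentioned above can be sketched with Python's built-in sqlite3 module (table and column names are made up): bulk-loading into an unindexed table inside one transaction and creating the index afterwards is usually much cheaper than maintaining the index on every insert.

```python
import sqlite3

# Bulk load first, index afterwards, so the B-tree is built once
# instead of being updated row by row during the insert.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = ((i, f"payload-{i}") for i in range(100_000))
with con:  # one transaction for the whole bulk load
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Index created after the load rather than before it.
con.execute("CREATE INDEX idx_events_id ON events (id)")

count = con.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 100000
```

The same pattern (drop or defer indexes, load, then re-index) is a common workaround before hitting the write-rate wall the comment describes.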
-
Show HN: Easily Convert WARC (Web Archive) into Parquet, Then Query with DuckDB
Well there's a virtual table extension to read parquet files in SQLite. I've not tried it myself. https://github.com/cldellow/sqlite-parquet-vtable
-
One-liner for running queries against CSV files with SQLite
/? sqlite arrow
- "Comparing SQLite, DuckDB and Arrow with UN trade data" https://news.ycombinator.com/item?id=29010103 ; partial benchmarks of query time and RAM requirements [relative to data size] would be useful
- https://arrow.apache.org/blog/2022/02/16/introducing-arrow-f... :
> Motivation: While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps.
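The double transposition described in the quote can be illustrated with a toy sketch (the data and names are made up, not Arrow's API): a columnar store that must serve rows through a row-based API, after which a columnar consumer transposes the data back.

```python
# Columnar storage: one list per column.
columns = {"id": [1, 2, 3], "name": ["a", "b", "c"]}

# Transpose 1: columns -> rows, to satisfy a row-oriented API
# such as JDBC or PEP 249.
rows = list(zip(*columns.values()))  # [(1, 'a'), (2, 'b'), (3, 'c')]

# Transpose 2: rows -> columns, for a columnar consumer like Arrow.
names = list(columns)
rebuilt = {n: [r[i] for r in rows] for i, n in enumerate(names)}

assert rebuilt == columns  # the data round-trips, but both copies were wasted work
print(rebuilt["name"])  # ['a', 'b', 'c']
```

Flight SQL's pitch is that both intermediate copies disappear when the wire format is columnar end to end.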
> - One cannot create a trigger on a virtual table.
Just posted about eBPF a few days ago; opcodes have costs that may or may not be accounted for: https://news.ycombinator.com/item?id=31688180
> - One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)
It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?
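The row-group memoization idea can be sketched as caching per-row-group min/max statistics so a predicate can skip whole groups without reading them. This is a hypothetical illustration of the technique, not the extension's actual shadow-table schema:

```python
from functools import lru_cache

# Three hypothetical Parquet row groups.
ROW_GROUPS = [list(range(0, 100)), list(range(100, 200)), list(range(200, 300))]

@lru_cache(maxsize=None)
def group_stats(group_idx):
    """Memoized min/max per row group, playing the role of a shadow table."""
    g = ROW_GROUPS[group_idx]
    return (min(g), max(g))

def scan(lo, hi):
    """Return values in [lo, hi], skipping row groups the stats rule out."""
    out = []
    for i in range(len(ROW_GROUPS)):
        gmin, gmax = group_stats(i)  # cached after the first scan
        if gmax < lo or gmin > hi:
            continue  # whole row group skipped without reading it
        out.extend(v for v in ROW_GROUPS[i] if lo <= v <= hi)
    return out

print(len(scan(150, 250)))  # 101
```

JOIN performance across virtual table implementations largely comes down to how much filtering like this the implementation can push down via xBestIndex, since SQLite otherwise falls back to full scans of the inner table.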
> - One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.
Are there URIs in the schema? Mustn't there thus be a meta-schema that does e.g. nested structs with portable types [with URIs], (and jsonschema, [and W3C SHACL])?
/? sqlite arrow virtual table
- sqlite-parquet-vtable reads parquet with arrow for SQLite virtual tables https://github.com/cldellow/sqlite-parquet-vtable :
$ sqlite/sqlite3
-
Show HN: WarcDB: Web crawl data as SQLite databases
https://github.com/cldellow/sqlite-parquet-vtable
But for my use case a virtual table would be too complicated.
csvq
-
Fx – Terminal JSON Viewer
sure can do, if you already use that shell [1], but personally I like specific tools for specific jobs, such as jq [2], fx, csvq [3], etc. There's value in decoupling shells from utilities (modularity, speed, innovation, etc.).
[1] I don't, but I'm tempted to try it; I like its data-types concept
[2] https://jqlang.github.io/jq/
[3] https://github.com/mithrandie/csvq
-
Tool to interact with CSV
csvq
-
Can SQL be used without an RDBMS?
There is a way of running SQL-like queries against CSV files.
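A minimal version of the idea, using only Python's standard library (the sample data is made up): load the CSV rows into an in-memory SQLite table, then query with plain SQL, no RDBMS server required.

```python
import csv
import io
import sqlite3

# Made-up CSV input; in practice this would come from a file.
csv_text = "name,price\napples,3\nbananas,2\ncherries,5\n"

con = sqlite3.connect(":memory:")  # no server, no files, no setup
con.execute("CREATE TABLE fruit (name TEXT, price INTEGER)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row
con.executemany("INSERT INTO fruit VALUES (?, ?)", reader)

result = con.execute(
    "SELECT name FROM fruit WHERE price > 2 ORDER BY price"
).fetchall()
print(result)  # [('apples',), ('cherries',)]
```

Tools like csvq, q, and trdsql package up essentially this workflow behind a single command.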
-
Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
Lately I have had to do a lot of flat file analysis and tools along these lines have been a godsend. Will check this out.
My go-to lately has been csvq (https://mithrandie.github.io/csvq/). Really nice to be able to run complicated selects right over a CSV file with no setup at all.
-
How do you merge CSV tables?
csvq (https://mithrandie.github.io/csvq/)
-
Tool to explore big data sets
I usually do this with awk, my largest target files being half a TB in size for a project last year (and far too large to hold entirely in RAM). There are some other utilities like csvq and csvsql both of which let you write SQL-style queries against CSV files, but I'm not sure how they perform on large files. There's a nice list of CSV manipulation tools too if any of those jog your memory.
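What makes awk-style processing work on half-terabyte files is streaming: memory use depends on the number of groups, not the number of rows. A minimal sketch of the same pattern in Python (sample data made up), reading one row at a time:

```python
import csv
import io
from collections import defaultdict

# Made-up CSV input; a real run would stream from a file handle instead.
csv_text = "region,sales\neast,10\nwest,5\neast,7\nwest,1\n"

totals = defaultdict(int)
reader = csv.DictReader(io.StringIO(csv_text))  # yields one row at a time
for row in reader:
    totals[row["region"]] += int(row["sales"])  # only running totals are kept

print(dict(totals))  # {'east': 17, 'west': 6}
```

SQL-over-CSV tools that materialize the whole file first lose this property, which is why their behavior on very large inputs is worth benchmarking before relying on them.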
-
sqly - execute SQL against CSV / JSON with shell
Apparently, many people had the same thought; existing tools for executing SQL against CSV include trdsql, q, csvq, and TextQL. They are highly functional; however, they have many options and no input completion. I found them just a little difficult to use.
- One-liner for running queries against CSV files with SQLite
-
Most efficient way to query .CSV files for Mac?
Please check out this tool https://github.com/mithrandie/csvq
-
Looking for: library to turn SQL (or abstracted) to code & execute against custom backend (slice of structs)
If you are looking to query nondb data with sql statements then you may want to check something like https://github.com/mithrandie/csvq (SQL for csv).
What are some alternatives?
duckdb - DuckDB is an in-process SQL OLAP Database Management System
querycsv - QueryCSV enables you to load CSV files and manipulate them using SQL queries then after you finish you can export the new values to a CSV file
WarcDB - WarcDB: Web crawl data as SQLite databases.
q - q - Run SQL directly on delimited files and multi-file sqlite databases
zsv - zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser
yq - yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
sqlite_protobuf - A SQLite extension for extracting values from serialized Protobuf messages
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
visidata - A terminal spreadsheet multitool for discovering and arranging data
miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
datasette - An open source multi-tool for exploring and publishing data