sqlite-parquet-vtable vs octosql

 | sqlite-parquet-vtable | octosql
---|---|---
Mentions | 4 | 34
Stars | 261 | 4,699
Growth | - | -
Activity | 10.0 | 1.2
Latest Commit | almost 3 years ago | 4 days ago
Language | C++ | Go
License | Apache License 2.0 | Mozilla Public License 2.0
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqlite-parquet-vtable
-
Universal Database
SQLite3 has a Parquet extension (https://github.com/cldellow/sqlite-parquet-vtable) that exposes Parquet files as virtual tables. I use SQLite3 a lot, for work and personally. It's really good, but I do have issues with large datasets, mainly due to VACUUM operations. The insertion rate drops significantly when a single table hits around 20M rows. Indexing is important for your query speed, but it'll impact your write speed.
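A common workaround for that write-speed hit (a sketch only; the table, column, and index names below are made up, not from the comment) is to bulk-load first and create secondary indexes afterwards:
-- hypothetical table: load the data before adding secondary indexes
CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER, payload TEXT);
BEGIN;
-- ... bulk INSERT statements here ...
COMMIT;
-- pay the indexing cost once, after the load, instead of on every insert
CREATE INDEX idx_events_ts ON events (ts);
-- reclaim free pages explicitly, outside the hot insert path
VACUUM;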
-
Show HN: Easily Convert WARC (Web Archive) into Parquet, Then Query with DuckDB
Well there's a virtual table extension to read parquet files in SQLite. I've not tried it myself. https://github.com/cldellow/sqlite-parquet-vtable
-
One-liner for running queries against CSV files with SQLite
/? sqlite arrow
- "Comparing SQLite, DuckDB and Arrow with UN trade data" https://news.ycombinator.com/item?id=29010103 ; partial benchmarks of query time and RAM requirements [relative to data size] would be
- https://arrow.apache.org/blog/2022/02/16/introducing-arrow-f... :
> Motivation: While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps.
> - One cannot create a trigger on a virtual table.
Just posted about eBPF a few days ago; opcodes have costs that may or may not be accounted for: https://news.ycombinator.com/item?id=31688180
> - One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)
It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?
> - One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.
Are there URIs in the schema? Mustn't there thus be a meta-schema that supports e.g. nested structs with portable types [with URIs] (and JSON Schema [and W3C SHACL])?
/? sqlite arrow virtual table
- sqlite-parquet-vtable reads parquet with arrow for SQLite virtual tables https://github.com/cldellow/sqlite-parquet-vtable :
$ sqlite/sqlite3
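(The session above is cut off; based on the project's README it continues roughly like this. The extension path, Parquet file path, and column name are assumptions, and the final filtered scan is the kind of query whose row-group filters the extension memoizes in shadow tables, per the comment above.)
sqlite> .load ./libparquet
sqlite> CREATE VIRTUAL TABLE demo USING parquet('data/rows.parquet');
sqlite> SELECT COUNT(*) FROM demo;
sqlite> SELECT * FROM demo WHERE id = 42;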
-
Show HN: WarcDB: Web crawl data as SQLite databases
https://github.com/cldellow/sqlite-parquet-vtable
But for my use case virtual tables would be too complicated.
octosql
-
Wazero: Zero dependency WebAssembly runtime written in Go
I never got it to anything close to a finished state, and instead moved on to doing the same prototype in LLVM and then Cranelift.
That said, here's some of the wazero-based code on a branch - https://github.com/cube2222/octosql/tree/wasm-experiment/was...
It really is just a very very basic prototype.
- Analyzing multi-gigabyte JSON files locally
-
DuckDB: Querying JSON files as if they were tables
This is really cool!
With their Postgres scanner[0] you can now easily query multiple data sources using SQL and join between them (i.e. a Postgres table with a JSON file); a rough sketch follows after the links below. That's something I strived to build with OctoSQL[1] before.
It's amazing to see how quickly DuckDB is adding new features.
Not a huge fan of C++, which is currently used for authoring extensions. It'd be really cool if somebody implemented a Rust extension SDK, or even something like what Steampipe[2] does for Postgres FDWs, which would provide a shim for quickly implementing non-performance-sensitive extensions for various things.
Godspeed!
[0]: https://duckdb.org/2022/09/30/postgres-scanner.html
[1]: https://github.com/cube2222/octosql
[2]: https://steampipe.io
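As a rough illustration of that kind of cross-source join in DuckDB (a sketch only: the connection string, table, file, and column names are invented, and exact extension names can differ between DuckDB versions):
INSTALL postgres_scanner; LOAD postgres_scanner;
INSTALL json; LOAD json;
-- join a live Postgres table against a local JSON file in one query
SELECT u.id, u.name, count(*) AS events
FROM postgres_scan('dbname=mydb host=localhost', 'public', 'users') AS u
JOIN read_json_auto('events.json') AS e ON e.user_id = u.id
GROUP BY u.id, u.name;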
-
Show HN: ClickHouse-local – a small tool for serverless data analytics
Congrats on the Show HN!
It's great to see more tools in this area (querying data from various sources in-place) and the Lambda use case is a really cool idea!
I've recently done a bunch of benchmarking, including ClickHouse Local, and the usage was straightforward, with everything working as it's supposed to.
Just to comment on performance, though: one area where I think ClickHouse could still improve - vs OctoSQL[0] at least - is the JSON datasource, which seems slower, especially if only a small part of each JSON object is used. If only a single field of many is used, OctoSQL lazily parses only that field and skips the others, which yields non-trivial performance gains on big JSON files with small queries.
Basically, for a query like `SELECT COUNT(*), AVG(overall) FROM books.json` with the Amazon Review Dataset, OctoSQL is twice as fast (3s vs 6s). That's a minor thing though (OctoSQL will slow down for more complicated queries, while for ClickHouse decoding the input is and remains the bottleneck).
[0]: https://github.com/cube2222/octosql
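For reference, the two invocations behind that comparison would look roughly like this (a sketch; the file name, and the choice of JSONEachRow as the ClickHouse input format, are assumptions):
# OctoSQL: parses only the fields the query actually touches
octosql "SELECT COUNT(*), AVG(overall) FROM books.json"
# clickhouse-local: decodes the input rows up front
clickhouse local --query "SELECT count(), avg(overall) FROM file('books.json', JSONEachRow)"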
-
Steampipe – Select * from Cloud;
To add somewhat of a counterpoint to the other response, I've tried the Steampipe CSV plugin and got 50x slower performance vs OctoSQL[0], which is itself 5x slower than something like DataFusion[1]. The CSV plugin doesn't contact any external APIs, so it should be a good benchmark of the plugin architecture, though it might just not be optimized yet.
That said, I don't imagine this ever being a bottleneck for the main use case of Steampipe - in that case I think the APIs themselves will always be the limiting part. But it does - potentially - speak to what you can expect if you'd like to extend your usage of Steampipe to more than just DevOps data.
[0]: https://github.com/cube2222/octosql
[1]: https://github.com/apache/arrow-datafusion
Disclaimer: author of OctoSQL
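For context, a comparison like that could be run with invocations roughly like these (a sketch: the file name is invented, and it assumes the Steampipe CSV plugin is configured with a search path containing the file, each CSV surfacing as a table named after it):
# OctoSQL reads the CSV file directly
octosql "SELECT COUNT(*) FROM employees.csv"
# Steampipe goes through its plugin/FDW layer
steampipe query "select count(*) from csv.employees"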
-
Go runtime: 4 years later
Actually, folks just use gRPC or Yaegi in Go.
See Terraform[0], Traefik[1], or OctoSQL[2].
I agree plugins would be welcome, though, especially for performance reasons, but also to be able to compile and load Go code into a running Go process (JIT-ish).
[0]: https://github.com/hashicorp/terraform
[1]: https://github.com/traefik/traefik
[2]: https://github.com/cube2222/octosql
Disclaimer: author of OctoSQL
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
-
Beginner interested in learning SQL. Have a few questions that I wasn’t able to find on Google.
Through more magic, you COULD of course use stuff like Spark, or, more easily, programs like TextQL, sq, or OctoSQL.
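For instance, a hypothetical OctoSQL one-liner against a CSV file (the file and column names are made up):
# run SQL directly on the CSV, no import step
octosql "SELECT department, COUNT(*) AS headcount FROM employees.csv GROUP BY department"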
-
How I Used DALL·E 2 to Generate The Logo for OctoSQL
The logo was created for OctoSQL, and the article walks through the whole path I went down (generation, variation, editing), with a lot of sample phrase-image combinations along the way. Let me know what you think!
-
How I Used DALL·E 2 to Generate the Logo for OctoSQL
Hey, author here, happy to answer any questions!
The logo was created for OctoSQL[0], and the article walks through the whole path I went down (generation, variation, editing), with a lot of sample phrase-image combinations along the way. Let me know what you think!
[0]: https://github.com/cube2222/octosql
What are some alternatives?
duckdb - DuckDB is an in-process SQL OLAP Database Management System
WarcDB - WarcDB: Web crawl data as SQLite databases.
q - Run SQL directly on delimited files and multi-file sqlite databases
zsv - zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser
trdsql - CLI tool that can execute SQL queries on CSV, LTSV, JSON, YAML and TBLN. Can output to various formats.
sqlite_protobuf - A SQLite extension for extracting values from serialized Protobuf messages
sqlitebrowser - Official home of the DB Browser for SQLite (DB4S) project. Previously known as "SQLite Database Browser" and "Database Browser for SQLite".
visidata - A terminal spreadsheet multitool for discovering and arranging data
sqlite-utils - Python CLI utility and library for manipulating SQLite databases
datasette - An open source multi-tool for exploring and publishing data
textql - Execute SQL against structured text like CSV or TSV