Apache Calcite vs q

| | Apache Calcite | q |
|---|---|---|
| Mentions | 28 | 46 |
| Stars | 4,368 | 10,126 |
| Growth | 1.1% | - |
| Activity | 9.0 | 2.1 |
| Latest Commit | 6 days ago | 16 days ago |
| Language | Java | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Calcite
-
Data diffs: Algorithms for explaining what changed in a dataset (2022)
> Make diff work on more than just SQLite.
Another way of doing this that I've been wanting to try for a while is to implement the DIFF operator in Apache Calcite[0]. Using Calcite, DIFF could be implemented as rewrite rules that generate the appropriate SQL to be executed directly against the database, or the DIFF operator could be implemented outside of the database (which the original paper shows is more efficient).
[0] https://calcite.apache.org/
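For a sense of the plumbing such rewrite rules would sit on top of, here is a minimal sketch (not the DIFF operator itself): it builds a relational expression with Calcite's RelBuilder and renders it back to dialect-specific SQL with RelToSqlConverter, which is the mechanism you would use to generate SQL for the target database. The schema, table, and column names are hypothetical.

```java
import org.apache.calcite.adapter.java.ReflectiveSchema;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rel2sql.RelToSqlConverter;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.dialect.PostgresqlSqlDialect;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class RewriteSketch {
  // Hypothetical table of (region, revenue) rows, exposed via ReflectiveSchema.
  public static class Sale { public String region; public double revenue; }
  public static class Db { public Sale[] sales = new Sale[0]; }

  public static void main(String[] args) {
    SchemaPlus root = Frameworks.createRootSchema(true);
    SchemaPlus schema = root.add("s", new ReflectiveSchema(new Db()));
    FrameworkConfig config =
        Frameworks.newConfigBuilder().defaultSchema(schema).build();
    RelBuilder b = RelBuilder.create(config);

    // Build a plan in relational algebra; a DIFF implementation would emit
    // trees like this from its rewrite rules.
    RelNode rel = b.scan("sales")
        .aggregate(b.groupKey("region"),
                   b.sum(false, "total", b.field("revenue")))
        .build();

    // Render the plan as SQL in the target database's dialect.
    RelToSqlConverter toSql = new RelToSqlConverter(PostgresqlSqlDialect.DEFAULT);
    SqlNode stmt = toSql.visitRoot(rel).asStatement();
    System.out.println(stmt.toSqlString(PostgresqlSqlDialect.DEFAULT).getSql());
  }
}
```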
-
Apache Baremaps: online maps toolkit
Yes, Planetiler rocks, and the memory-mapped collections enabled us to remove our dependency on RocksDB.
From my perspective, Planetiler started as an effort to generate vector tiles from the OpenMapTiles schema as fast as possible (pbf -> mvt). By contrast, Baremaps started as an effort to create a new schema and style from the ground up. In this regard, having a database (pbf -> db <- mvt) enables live reloading of changes made in the configuration files. The database has a cost, but it also comes with additional advantages (updates, dynamic data, generation of tiles at zoom levels 16+, etc.).
That being said, I think the two projects overlap, and I hope we will find opportunities to collaborate in the future. For instance, whereas PostgreSQL is still required in Baremaps, I recently ported a lot of the ST_ functions of PostGIS to Apache Calcite with the intent of executing SQL on fast memory-mapped collections.
https://github.com/apache/calcite/blob/main/core/src/main/ja...
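Calcite's spatial functions can be tried directly over its stock JDBC driver by enabling them in the connect string. A minimal sketch, assuming calcite-core is on the classpath (the fun=spatial and conformance settings follow Calcite's spatial documentation):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpatialSketch {
  public static void main(String[] args) throws Exception {
    // fun=spatial registers the ST_* functions in the SQL environment.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:calcite:fun=spatial;conformance=LENIENT");
         Statement stmt = conn.createStatement();
         // Point-in-polygon test, evaluated entirely inside Calcite.
         ResultSet rs = stmt.executeQuery(
             "SELECT ST_Contains("
                 + "ST_GeomFromText('POLYGON ((0 0, 0 2, 2 2, 2 0, 0 0))'), "
                 + "ST_MakePoint(1, 1))")) {
      rs.next();
      System.out.println(rs.getBoolean(1)); // true
    }
  }
}
```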
A planet-wide import into PostGIS currently takes about 4 hours with the COPY API (easy to parallelize), followed by about 12 hours of simplification in PostGIS (not easy to parallelize). I will try to publish a detailed benchmark in the future.
-
How to manipulate SQL string programmatically?
Use a SQL parser like sqlglot or Apache Calcite to parse the user's query into an AST.
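For the Calcite route, getting an AST is a few lines. A minimal sketch with a made-up query:

```java
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;

public class AstSketch {
  public static void main(String[] args) throws SqlParseException {
    SqlParser parser = SqlParser.create(
        "SELECT name, total FROM orders WHERE total > 100");
    SqlNode ast = parser.parseQuery(); // the query as a tree of SqlNodes
    System.out.println(ast);           // toString() re-renders it as SQL
  }
}
```

From there you can walk or rewrite the SqlNode tree programmatically and unparse it back to a string.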
- Can SQL be used without an RDBMS?
- Apache Calcite
- Want to contribute more to open source projects.
-
CITIC Industrial Cloud — Apache ShardingSphere Enterprise Applications
The SQL Federation engine contains processes such as SQL Parser, SQL Binder, SQL Optimizer, Data Fetcher, and Operator Calculator, and is suitable for handling correlated queries and subqueries across multiple database instances. At the underlying layer, it uses Calcite to implement RBO (Rule-Based Optimizer) and CBO (Cost-Based Optimizer) based on relational algebra, and retrieves results through the optimal execution plan.
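This is not ShardingSphere's actual code, but a rule-based (RBO) pass in Calcite typically looks like the sketch below: the incoming logical plan is assumed to come from the parser/binder stages, and a cost-based pass would use VolcanoPlanner instead of the heuristic HepPlanner shown here.

```java
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.CoreRules;

public class RboSketch {
  /** Applies two classic rewrite rules to a logical plan produced upstream. */
  static RelNode optimize(RelNode logicalPlan) {
    HepProgram program = new HepProgramBuilder()
        .addRuleInstance(CoreRules.FILTER_INTO_JOIN) // push filters below joins
        .addRuleInstance(CoreRules.PROJECT_MERGE)    // merge adjacent projections
        .build();
    HepPlanner planner = new HepPlanner(program);
    planner.setRoot(logicalPlan);
    return planner.findBestExp(); // the rewritten plan
  }
}
```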
-
Postgres wire compatible SQLite proxy
Awesome to see work in the DB wire compatible space. On the MySQL side, there was MySQL Proxy (https://github.com/mysql/mysql-proxy), which was scriptable with Lua, with which you could create your own MySQL wire compatible connections. Unfortunately it appears to have been abandoned by Oracle and IIRC doesn't work with 5.7 and beyond. I used it in the past to hack together a MySQL wire adapter for Interana (https://scuba.io/).
I guess these days the best approach for connecting arbitrary data sources to existing drivers, at least for OLAP, is Apache Calcite (https://calcite.apache.org/). Unfortunately that feels a little more involved.
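The Calcite flavor of "arbitrary data source behind an existing driver" is to describe the source in a model and connect through Calcite's stock JDBC driver. A minimal sketch, assuming the calcite-example-csv adapter is on the classpath and a hypothetical data/ORDERS.csv exists (unquoted identifiers are upper-cased by default):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class CsvOverJdbcSketch {
  public static void main(String[] args) throws Exception {
    // Inline model: every *.csv file in ./data becomes a table in schema CSV.
    String model = "inline:{\"version\":\"1.0\",\"defaultSchema\":\"CSV\","
        + "\"schemas\":[{\"name\":\"CSV\",\"type\":\"custom\","
        + "\"factory\":\"org.apache.calcite.adapter.csv.CsvSchemaFactory\","
        + "\"operand\":{\"directory\":\"data\"}}]}";
    Properties props = new Properties();
    props.setProperty("model", model);
    try (Connection conn = DriverManager.getConnection("jdbc:calcite:", props);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
      rs.next();
      System.out.println(rs.getLong(1));
    }
  }
}
```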
-
Launch HN: Hydra (YC W22) – Query Any Database via Postgres
For anyone interested, Apache Calcite[0] is an open source data management framework which seems to do many of the same things that Hydra claims to do, but takes a different approach. Operating as a Java library, Calcite contains "adapters" to many different data sources, from existing JDBC connectors to Elasticsearch to Cassandra. All of these different data sources can be joined together as desired. Calcite also has its own optimizer, which is able to push down relevant parts of the query to the different data sources. At the same time, you get full SQL on data sources which don't support it, with Calcite executing the remaining bits itself.
Unfortunately, I would not be too surprised if Calcite was found to be less performance-optimized than Hydra. That said, there are users of Calcite at Google, Uber, Spotify, and others who have made great use of various parts of the framework.
[0] https://calcite.apache.org/
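A sketch of the federation described above, using the same inline-model mechanism as the CSV example earlier: two schemas (a live Postgres database and a CSV directory) joined in one query. All names, credentials, and files here are hypothetical, and the Postgres JDBC driver plus calcite-example-csv are assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class FederationSketch {
  public static void main(String[] args) throws Exception {
    String model = "inline:{\"version\":\"1.0\",\"schemas\":["
        + "{\"name\":\"PG\",\"type\":\"custom\","
        + "\"factory\":\"org.apache.calcite.adapter.jdbc.JdbcSchema$Factory\","
        + "\"operand\":{\"jdbcDriver\":\"org.postgresql.Driver\","
        + "\"jdbcUrl\":\"jdbc:postgresql://localhost/app\","
        + "\"jdbcUser\":\"app\",\"jdbcPassword\":\"secret\"}},"
        + "{\"name\":\"CSV\",\"type\":\"custom\","
        + "\"factory\":\"org.apache.calcite.adapter.csv.CsvSchemaFactory\","
        + "\"operand\":{\"directory\":\"data\"}}]}";
    Properties props = new Properties();
    props.setProperty("model", model);
    try (Connection conn = DriverManager.getConnection("jdbc:calcite:", props);
         Statement stmt = conn.createStatement();
         // Joins a Postgres table with a file data/EVENTS.csv (whose header is
         // assumed to declare user_id:int); Calcite pushes what it can down to
         // Postgres and executes the rest itself.
         ResultSet rs = stmt.executeQuery(
             "SELECT u.\"name\", COUNT(*) AS events "
                 + "FROM pg.\"users\" AS u "
                 + "JOIN csv.\"EVENTS\" AS e ON e.\"user_id\" = u.\"id\" "
                 + "GROUP BY u.\"name\"")) {
      while (rs.next()) {
        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
      }
    }
  }
}
```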
-
Anyone know of any software that can help in designing and then outputting to various databases?
Abstraction Layer - You can use something like Calcite to abstract out your data storage. https://calcite.apache.org/
q
-
I wrote this iCalendar (.ics) command-line utility to turn common calendar exports into more broadly compatible CSV files.
CSV utilities (still haven't picked a favorite one...): https://github.com/harelba/q https://github.com/BurntSushi/xsv https://github.com/wireservice/csvkit https://github.com/johnkerl/miller
- Request for help with Excel automation
-
Show HN: ClickHouse-local – a small tool for serverless data analytics
I think they're talking about https://github.com/harelba/q, which is not very fast.
-
sqly - execute SQL against CSV / JSON with shell
Apparently, many people have thought the same thing; existing tools that execute SQL against CSV include trdsql, q, csvq, and TextQL. They are highly functional; however, they have many options and no input completion, and I found them just a little difficult to use.
-
Q – Run SQL Directly on CSV or TSV Files
Hi, author of q here.
Regarding the error you got: q currently does not autodetect headers, so you'd need to add -H as a flag in order to use the "country" column name. You're absolutely correct on failing fast here - it's a bug which I'll fix.
In general, regarding speed: q supports automatic caching of CSV files (through the "-C readwrite" flag). Once activated, it will write the data into another file (with a .qsql extension) and will automatically use it in further queries to speed things up considerably.
Effectively, the .qsql files are regular sqlite3 files (with some metadata), and q can be used to query them directly (or any regular sqlite3 file), including the ability to seamlessly join between multiple sqlite3 files.
http://harelba.github.io/q/#auto-caching-examples
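Since the cache files are plain SQLite, any SQLite client can open them. A minimal sketch using the org.xerial sqlite-jdbc driver and a hypothetical mydata.csv.qsql produced by `q -C readwrite`; the table name q chose can be discovered from sqlite_master:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QsqlSketch {
  public static void main(String[] args) throws Exception {
    // Open the .qsql cache as an ordinary SQLite database.
    try (Connection conn =
             DriverManager.getConnection("jdbc:sqlite:mydata.csv.qsql");
         Statement stmt = conn.createStatement();
         // List the tables q stored in the cache.
         ResultSet rs = stmt.executeQuery(
             "SELECT name FROM sqlite_master WHERE type = 'table'")) {
      while (rs.next()) {
        System.out.println(rs.getString("name"));
      }
    }
  }
}
```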
- PostgreSQL alternative for large amounts of data
-
q VS trdsql - a user suggested alternative
2 projects | 25 Jun 2022
- One-liner for running queries against CSV files with SQLite
What are some alternatives?
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
textql - Execute SQL against structured text like CSV or TSV
ANTLR - ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files.
csvq - SQL-like query language for csv
Presto - The official home of the Presto distributed SQL query engine for big data
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
JSqlParser - JSqlParser parses an SQL statement and translates it into a hierarchy of Java classes. The generated hierarchy can be navigated using the Visitor Pattern
InquirerPy - :snake: Python port of Inquirer.js (A collection of common interactive command-line user interfaces)
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
xsv - A fast CSV command line toolkit written in Rust.
Apache Drill - Apache Drill is a distributed MPP query layer for self describing data
ledger - Double-entry accounting system with a command-line reporting interface