spyql vs prql
| | spyql | prql |
|---|---|---|
| Mentions | 23 | 106 |
| Stars | 902 | 9,427 |
| Growth | - | 2.7% |
| Activity | 0.0 | 9.9 |
| Latest commit | over 1 year ago | 6 days ago |
| Language | Jupyter Notebook | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spyql
- Fq: Jq for Binary Formats
I prefer a SQL-like format. It's not as complete, but it covers most of the day-to-day use cases. Take a look at https://github.com/dcmoura/spyql (I am the author). Congrats on fq!
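For a flavor of that SQL-like syntax, here is a minimal sketch (the input and field names are invented, and the json->field accessor is my recollection of the SPyQL docs):

```sh
# Query JSON lines from stdin with a SQL-like SELECT plus a Python expression.
$ echo '{"name": "fq", "stars": 9000}' \
  | spyql "SELECT json->name, json->stars * 2 AS doubled FROM json"
```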
- Command-line data analytics made easy with SPyQL
SPyQL documentation: spyql.readthedocs.io
- This Week In Python
spyql – Query data on the command line with SQL-like SELECTs powered by Python expressions
- Command-line data analytics made easy
- Jc – JSONifies the output of many CLI tools
This is great!
I am the author of SPyQL [1]. Combining JC with SPyQL, you can easily query the JSON output and run Python commands on top of it from the command line :-) You can do aggregations and so forth in a much simpler and more intuitive way than with jq.
I just wrote a blog post [2] that illustrates this. It is more focused on CSV, but the commands would be the same if you were working with JSON.
[1] https://github.com/dcmoura/spyql
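A hedged sketch of that combo (the jc parser flag and its output field names are assumptions on my part; check jc's schema docs):

```sh
# jc turns uptime's output into a single JSON object;
# spyql then queries it like a one-row table.
$ uptime | jc --uptime | spyql "SELECT json->users, json->load_1m FROM json"
```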
- The fastest command-line tools for querying large JSON datasets
- Working with more than 10gb csv
You can import the data into a PostgreSQL/MySQL/SQLite/... database and then query the database. However, even with the right choice of indexes, it might take a while to run queries on a table with hundreds of millions of records. You can easily import your data into these databases with SPyQL: $ spyql "SELECT * FROM csv TO sql(table=my_table_name)" | sqlite3 my.db (you would need to create the table my_table_name before running the command).
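Spelled out end-to-end, the flow might look like this (a sketch: the schema is illustrative, and I'm assuming SPyQL's sql writer emits INSERT statements for sqlite3 to execute):

```sh
# 1. Create the target table first (illustrative schema).
$ sqlite3 my.db "CREATE TABLE my_table_name (id INTEGER, name TEXT, value REAL)"

# 2. Stream the CSV through spyql; TO sql(...) should emit INSERTs that sqlite3 applies.
$ spyql "SELECT * FROM csv TO sql(table=my_table_name)" < data.csv | sqlite3 my.db

# 3. Query as usual.
$ sqlite3 my.db "SELECT COUNT(*) FROM my_table_name"
```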
- ClickHouse Cloud is now in Public Beta
https://github.com/dcmoura/spyql/blob/master/notebooks/json_...
And ClickHouse looks like a normal relational database - there is no need for multiple components for different tiers (like in Druid), no need for manual partitioning into "daily" and "hourly" tables (like you do in Spark and BigQuery), no need for lambda architecture... It's refreshing how something can be both simple and fast.
- A SQLite extension for reading large files line-by-line
- I want to convert a large JSON file into Tabular Format.
I thought this library was pretty nifty for JSON. It's also relatively fast compared to most JSON parsers: https://github.com/dcmoura/spyql
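As a rough sketch of that JSON-to-tabular conversion (field names invented; TO csv picks the output format):

```sh
# Project selected fields of JSON lines into a CSV file.
$ cat large.json | spyql "SELECT json->id, json->name FROM json TO csv" > large.csv
```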
prql
- Prolog language for PostgreSQL proof of concept
- SQL is syntactic sugar for relational algebra
> I completely attribute this to SQL being difficult or "backwards" to parse. I mean backwards in the way that in SQL you start with what you want first (the SELECT) rather than what you have and whittling it down.
> The turning point for me was to just accept SQL for what it is.
Or just write PRQL and compile it to SQL
https://github.com/PRQL/prql
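For instance, with the prqlc CLI (the query is illustrative; as far as I recall, prqlc compile reads PRQL from stdin and prints the generated SQL):

```sh
# Start from the table, narrow it down, project the columns.
$ echo 'from employees | filter country == "PT" | select {name, salary}' | prqlc compile
```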
- Transpile Any SQL to PostgreSQL Dialect
- Show HN: Open-source, browser-local data exploration using DuckDB-WASM and PRQL
Hey HN! We've built Pretzel, an open-source data exploration and visualization tool that runs fully in the browser and can handle large files (a 200 MB CSV on my 8 GB MacBook Air is snappy). It's also reactive - so if, for example, you change a filter, all the data transform blocks after it re-evaluate automatically. You can try it here: https://pretzelai.github.io/ (static hosted webpage) or see a demo video here: https://www.youtube.com/watch?v=73wNEun_L7w
You can play with the demo CSV that’s pre-loaded (GitHub data of text-editor adjacent projects) or upload your own CSV/XLSX file. The tool runs fully in-browser—you can disconnect from the internet once the website loads—so feel free to use sensitive data if you like.
Here's how it works: you upload a CSV file and then explore your data as a series of successive data transforms and plots. For example, you might: (1) remove some columns; (2) apply some filters (remove nulls, remove outliers, restrict time range, etc.); (3) do a pivot (i.e., a group-by but fancier); (4) plot a chart; (5) download the chart and the transformed data. See screenshot: https://imgur.com/a/qO4yURI
In the UI, each transform step appears as a “Block”. You can always see the result of the full transform in a table on the right. The transform blocks are editable - for instance in the example above, you can go to step 2, change some filters and the reactivity will take care of re-computing all the cells that follow, including the charts.
We wanted Pretzel to run locally in the browser and be extremely performant on large files. So, we parse CSVs with the fastest CSV parser (uDSV: https://github.com/leeoniya/uDSV) and use DuckDB-Wasm (https://github.com/duckdb/duckdb-wasm) to do all the heavy lifting of processing the data. We also wanted to allow chained data transformations where each new block operates on the result of the previous block. For this, we're using PRQL (https://prql-lang.org/) since it maps 1:1 to chained data transform blocks - each block maps to a chunk of PRQL which, when combined, describes the full data transform chain. (PRQL doesn't support DuckDB's PIVOT statement, though, so we had to make some CTE-based hacks.)
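A hedged illustration of that block-to-PRQL mapping (table and column names are made up): each UI block contributes one pipeline step, and concatenating the steps gives a single query that compiles to one SQL statement.

```sh
# Block 1: the uploaded file; Block 2: a filter; Block 3: a group-by.
$ printf '%s\n' \
    'from uploaded_csv' \
    'filter amount != null' \
    'group {category} (aggregate {total = sum amount})' \
  | prqlc compile
```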
There's also an AI block: this is the only (optional) feature that requires an internet connection, but we're working on adding local model support via Ollama. For now, you can use your own OpenAI API key or use an AI server we provide (a GPT-4 proxy; it's loaded with a few credits), specify a transform in plain English, and get back the SQL for the transform, which you can edit.
Our roadmap includes allowing API calls to create new columns, support for an SQL block with nice autocomplete features, and a Python block (using Pyodide to run Python in the browser) on the results of the data transforms, much like a Jupyter notebook.
There are two of us and we've only spent about a week coding this and fixing major bugs, so there are still some bugs to iron out. We'd love for you to try this and to get your feedback!
- Pql, a pipelined query language that compiles to SQL (written in Go)
> Looks like PRQL doesn't have a Go library so I guess they just really wanted something in Go?
There are some C bindings, and the example in the README shows integration with Go:
https://github.com/PRQL/prql/tree/main/prqlc/bindings/prqlc-...
- FLaNK Stack 26 February 2024
- FLaNK Stack Weekly 19 Feb 2024
- PRQL as a DuckDB Extension
Can someone tell me why PRQL is better? I went here: https://github.com/PRQL/prql
It looks nice, but what are its strengths compared to SQL?
- Shouldn't FROM come before SELECT in SQL?
PRQL [1] is a compile-to-SQL relational querying language that puts FROM first.
[1] https://prql-lang.org
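A small sketch of that ordering (the query is invented and the compiled SQL is paraphrased from memory, so treat both as approximate):

```sh
# PRQL reads top-down: source first, projection last.
$ echo 'from tracks | filter plays > 100 | select {title, artist}' | prqlc compile
# roughly: SELECT title, artist FROM tracks WHERE plays > 100
```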
- Vanna.ai: Chat with your SQL database
https://prql-lang.org/ might be an answer for this. As a cross-database pipelined language, it would allow RAG to be intermixed with the query, and the syntax may(?) be more reliable to generate
What are some alternatives?
malloy - Malloy is an experimental language for describing data relationships and transformations.
tresql - Shorthand SQL/JDBC wrapper language, providing nested results as JSON and more
Preql - An interpreted relational query language that compiles to SQL.
bustub - The BusTub Relational Database Management System (Educational)
prosto - Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby
pxi - 🧚 pxi (pixie) is a small, fast, and magical command-line data processor similar to jq, mlr, and awk.
toydb - Distributed SQL database in Rust, written as a learning project
partiql-lang-kotlin - PartiQL libraries and tools in Kotlin.
rfcs - RFCs for major changes to EdgeDB