spyql vs jq

| | spyql | jq |
|---|---|---|
| Mentions | 23 | 306 |
| Stars | 902 | 25,063 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 1 year ago | 11 months ago |
| Language | Jupyter Notebook | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spyql
- Fq: Jq for Binary Formats
I prefer a SQL-like format. It’s not as complete, but it covers most of the day-to-day use cases. Take a look at https://github.com/dcmoura/spyql (I am the author). Congrats on fq!
- Command-line data analytics made easy with SPyQL
SPyQL documentation: spyql.readthedocs.io
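As a quick illustration of the SQL-like syntax, here is a minimal sketch that filters a CSV stream on stdin (the data and column names are made up for this example):

```shell
# spyql reads CSV from stdin with `FROM csv` and writes CSV by default;
# column types are inferred, so the numeric comparison works directly.
printf 'name,age\nAlice,31\nBob,12\n' \
  | spyql "SELECT name FROM csv WHERE age > 18"
```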
- This Week In Python
spyql – Query data on the command line with SQL-like SELECTs powered by Python expressions
- Command-line data analytics made easy
- Jc – JSONifies the output of many CLI tools
This is great!
I am the author of SPyQL [1]. Combining jc with SPyQL, you can easily query the JSON output and run Python expressions on top of it from the command line :-) You can do aggregations and so forth in a much simpler and more intuitive way than with jq.
I just wrote a blog post [2] that illustrates it. It is more focused on CSV, but the commands would be the same if you were working with JSON.
[1] https://github.com/dcmoura/spyql
- The fastest command-line tools for querying large JSON datasets
- Working with more than 10gb csv
You can import the data into a PostgreSQL/MySQL/SQLite/... database and then query it there. However, even with the right choice of indexes, queries on a table with hundreds of millions of records can take a while. You can easily import your data into these databases with SPyQL: `$ spyql "SELECT * FROM csv TO sql(table=my_table_name)" | sqlite3 my.db` (you need to create the table my_table_name before running the command).
- ClickHouse Cloud is now in Public Beta
https://github.com/dcmoura/spyql/blob/master/notebooks/json_...
And ClickHouse looks like a normal relational database - there is no need for multiple components for different tiers (like in Druid), no need for manual partitioning into "daily" and "hourly" tables (like you do in Spark and BigQuery), no need for a lambda architecture... It's refreshing how something can be both simple and fast.
- A SQLite extension for reading large files line-by-line
- I want to convert a large JSON file into Tabular Format.
I thought this library was pretty nifty for JSON. It's also relatively fast compared to most JSON parsers: https://github.com/dcmoura/spyql
jq
- GNU Parallel, where have you been all my life?
That should recursively list directories, counting only the files within each, and output² JSONL that can be further mangled³ within the shell. You could just as easily populate an associative array for further work, or $whatever. Unlike bash, zsh has reasonable behaviour around quoting and whitespace, too.
¹ https://zsh.sourceforge.io/Doc/Release/User-Contributions.ht...
² https://github.com/jpmens/jo
³ https://github.com/stedolan/jq
- How do I edit reputation?
- Jj: JSON Stream Editor
What I miss from jq, and what is implemented but unreleased, is platform-independent line delimiters.
jq on Windows produces \r\n-terminated lines, which can be annoying when used with Cygwin / MSYS2 / WSL. The '--binary' option, which tells jq not to convert line delimiters, is one of those pending improvements.
https://github.com/stedolan/jq/commit/0dab2b18d73e561f511801...
- Building and deploying a web API powered by ChatGPT
If you have jq installed you can use it to make the output look nicer.
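For instance, piping a compact API response through jq's identity filter pretty-prints it (the payload here is invented):

```shell
# `.` is the identity filter: jq parses the JSON and re-emits it indented.
echo '{"answer":{"text":"hello","tokens":3}}' | jq .
```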
- Search in your Jupyter notebooks from the CLI, fast.
It requires jq for JSON processing and GNU parallel for concurrent searches in the notebooks.
- Check the jq manual!
- mkv vs mp4 metadata
- Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT
jq is your friend.
- Memes are all cool and all. But this is your daily reminder that 10000! =
- How to export/import/externally-edit/whatever WI entries?
The jq command (https://stedolan.github.io/jq/) is useful for pulling that information out.
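A sketch of that kind of extraction, assuming the entries are exported as a JSON array (the structure here is invented for illustration):

```shell
# -r prints raw strings instead of JSON-quoted ones;
# .[].key selects the "key" field of every array element.
echo '[{"key":"lore","content":"..."},{"key":"npc","content":"..."}]' \
  | jq -r '.[].key'
```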
What are some alternatives?
prql - PRQL is a modern language for transforming data — a simple, powerful, pipelined SQL replacement
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
malloy - Malloy is an experimental language for describing data relationships and transformations.
dasel - Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
tresql - Shorthand SQL/JDBC wrapper language, providing nested results as JSON and more
gojq - Pure Go implementation of jq
Preql - An interpreted relational query language that compiles to SQL.
json5 - JSON5 — JSON for Humans
prosto - Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby
jp - Validate and transform JSON with Bash
pxi - 🧚 pxi (pixie) is a small, fast, and magical command-line data processor similar to jq, mlr, and awk.
nushell - A new type of shell