miller
ndjson.github.io
| | miller | ndjson.github.io |
|---|---|---|
| Mentions | 63 | 17 |
| Stars | 8,553 | 23 |
| Growth | - | - |
| Activity | 9.1 | 0.0 |
| Latest commit | 6 days ago | 9 months ago |
| Language | Go | CSS |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
miller
- Qsv: Efficient CSV CLI Toolkit
-
jq 1.7 Released
jq and miller[1] are essential parts of my toolbelt, right up there with awk and vim.
[1]: https://github.com/johnkerl/miller
-
Perl first commit: a “replacement” for Awk and sed
> This works really well if your problem can be solved in one or two liners.
My personal comfort threshold is around the 100-line mark. It's even possible to write maintainable shell scripts up to 500 lines, but it mostly depends on the problem you're trying to solve, and the discipline of the programmer to follow best practices (use sane defaults, ShellCheck, etc.).
> It goes bad very quickly when, say, you have two CSV files and want to join them the SQL way.
In that case we're talking about structured data, and, yeah, Perl or Python would be easier to work with. That said, depending on the complexity of the CSV, you can still go a long way with plain Bash with IFS/read(1) or tr(1) to split CSV columns. This wouldn't be very robust, but there are tools that handle CSV specifically[1], which can be composed in a shell script just fine.
So it's always a balancing act: be productive quickly with a shell script, or reach for a programming language once the tools aren't a good fit or maintenance becomes an issue.
[1]: https://miller.readthedocs.io/
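As a sketch of the IFS/read(1) approach mentioned above (the sample data is made up, and note the caveat in the comments):

```shell
# Naive CSV column splitting with IFS and read(1).
# Caveat: this breaks on quoted fields containing commas or embedded
# newlines, which is exactly why dedicated CSV tools exist.
printf 'name,qty\napple,3\nbanana,5\n' | tail -n +2 |
while IFS=, read -r name qty; do
  echo "$name: $qty"
done
```

For simple, well-behaved data this is often all you need inside a shell script.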
-
Need help on cleaning this data!!
where mlr is from https://github.com/johnkerl/miller
-
Running weekly average
if this class of problems (i.e., csv/tsv data) is your main target you may find miller (https://github.com/johnkerl/miller) much more useful in the long run
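For comparison, a per-week average over CSV needs only the Python stdlib (column names `week` and `value` are invented for illustration); in Miller the same idea would be something like `mlr --csv stats1 -a mean -f value -g week`:

```python
import csv
import io
from collections import defaultdict

def weekly_average(csv_text):
    """Mean of the 'value' column, grouped by the 'week' column."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        sums[row["week"]] += float(row["value"])
        counts[row["week"]] += 1
    return {week: sums[week] / counts[week] for week in sums}

data = "week,value\n2024-W01,10\n2024-W01,20\n2024-W02,5\n"
print(weekly_average(data))  # {'2024-W01': 15.0, '2024-W02': 5.0}
```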
-
GQL: A new SQL like query language for .git files written in Rust
That said, you may be interested in Miller (https://github.com/johnkerl/miller) which provides similar capabilities for CSV, JSON, and XML files. It doesn't use a SQL grammar, but that's just the proverbial lipstick on the thing. I'm not the author, but I have used it and I see some parallels in use cases at the very least.
- johnkerl/miller: Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
-
Any cli utility to create ascii/org mode tables?
worth giving Miller a shot
-
I wrote this iCalendar (.ics) command-line utility to turn common calendar exports into more broadly compatible CSV files.
CSV utilities (still haven't picked a favorite one...): https://github.com/harelba/q https://github.com/BurntSushi/xsv https://github.com/wireservice/csvkit https://github.com/johnkerl/miller
- Miller: Like Awk, sed, cut, join, and sort for CSV, TSV, and tabular JSON
ndjson.github.io
-
What the fuck
However, since every JSON document can be represented in a single line, something like newline-delimited JSON / JSON Lines feels like it would've been more suitable for that kind of data.
- The XML spec is 25 years old today
-
Consider Using CSV
No one uses that format for streamed JSON; see ndjson and JSON Lines (jsonl)
http://ndjson.org/
The size complaint is overblown, as repeated fields are compressed away.
As other folks rightfully commented, CSV is a minefield. One should assume every CSV file is broken in some way. They also don't enumerate any of the downsides of CSV.
What people should consider is using formats like Avro or Parquet that carry their schema with them, so the data can be loaded and analyzed without having to manually deal with column meanings.
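The "repeated fields are compressed away" point is easy to check: gzip collapses the repeated key names in NDJSON almost entirely (a rough illustration with made-up records, not a benchmark):

```python
import gzip
import json

# Build 1,000 NDJSON records that all repeat the same key names.
records = [{"timestamp": i, "status": "ok", "latency_ms": i % 50}
           for i in range(1000)]
ndjson = "\n".join(json.dumps(r) for r in records).encode()

compressed = gzip.compress(ndjson)
# The compressed stream is a small fraction of the raw size, because
# the repeated keys (and much of the values) deflate away.
print(len(ndjson), len(compressed))
```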
-
DevTool Intro: The Algolia CLI!
What is ndjson? Newline delimited JSON is the format the Algolia CLI reads from and writes to files. This means that any command that passes ndjson formatted data as output or accepts it as input can be piped together with an Algolia CLI command! We’ll see more of this in the next example
-
On reading a JSON file, it loads the entire JSON into memory.
You might consider using json-lines format (also known as newline-delimited JSON), in which each line is a separate JSON document so they can be loaded individually.
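A minimal sketch of that pattern in Python (the sample input is made up):

```python
import io
import json

def iter_jsonl(fp):
    """Yield one parsed record per line. Memory use stays proportional
    to a single record rather than the whole file."""
    for line in fp:
        line = line.strip()
        if line:  # tolerate blank lines
            yield json.loads(line)

sample = io.StringIO('{"id": 1}\n{"id": 2}\n')
print([rec["id"] for rec in iter_jsonl(sample)])  # [1, 2]
```

The same loop works unchanged over an open file handle, so arbitrarily large inputs can be streamed.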
-
How to format it as json?
The format you're getting is known as Newline-Delimited JSON. Instead of trying to parse the whole input and pass that to the JSON Decoder, you can use something like bufio.Scanner to get and parse it line by line.
-
Arrow2 0.12.0 released - including almost complete support for Parquet
This is in opposition to NDJSON, which allows splitting records without deserializing the JSON itself, via e.g. read_lines. FWIW, CSV suffers from the same problem as JSON - it is generally not possible to break it into records without deserializing. It is worse than NDJSON because the character \n may appear at any position within a field, thus forbidding read_lines.
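That failure mode is easy to demonstrate: a quoted CSV field may legally contain a newline, so line-based splitting miscounts records while a real CSV parser handles it (Python stdlib, for illustration):

```python
import csv
import io

# One CSV header plus one record whose quoted field contains a newline.
data = 'id,comment\n1,"line one\nline two"\n'

naive_records = data.strip().split("\n")              # 3 "records" - wrong
parsed_records = list(csv.reader(io.StringIO(data)))  # header + 1 row - right

print(len(naive_records), len(parsed_records))  # 3 2
```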
-
Processing large JSON files in Python without running out of memory
I've always seen it referred to as ndjson
-
Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
I think this would be fine, as long as the CSV layer was still parsable per RFC 4180; then you could still use a normal CSV parser to parse the CSV layer and a normal JSON parser to parse the JSON layer. My worry with your example is that it is neither format, so it will need custom serialisation and deserialisation logic, as it is essentially a brand new format.
https://datatracker.ietf.org/doc/html/rfc4180
If you’re looking for line-oriented JSON, another option would be ndjson: http://ndjson.org/
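The two-layer idea can be sketched like this: keep the outer file valid RFC 4180 CSV and put JSON in a quoted field, so each layer round-trips through its standard parser (a hypothetical layout for illustration, not any established format):

```python
import csv
import io
import json

# Outer layer: RFC 4180 CSV. Inner layer: JSON inside a quoted field.
rows = [["id", "payload"],
        ["1", json.dumps({"tags": ["a", "b"]})]]

buf = io.StringIO()
csv.writer(buf).writerows(rows)  # the writer quotes/escapes as needed

# Round-trip using the two standard parsers, one per layer.
header, record = list(csv.reader(io.StringIO(buf.getvalue())))
payload = json.loads(record[1])
print(payload["tags"])  # ['a', 'b']
```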
- IETF should keep XMPP as IM standard, instead of Matrix
What are some alternatives?
visidata - A terminal spreadsheet multitool for discovering and arranging data
ndjson - Streaming line delimited json parser + serializer
xsv - A fast CSV command line toolkit written in Rust.
flatten-tool - Tools for generating CSV and other flat versions of the structured data
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
babashka - A Clojure babushka for the grey areas of Bash (native fast-starting Clojure scripting environment) [Moved to: https://github.com/babashka/babashka]
dasel - Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
datasette - An open source multi-tool for exploring and publishing data
csvtk - A cross-platform, efficient and practical CSV/TSV toolkit in Golang
grop - helper script for the `gron | grep | gron -u` workflow
yq - yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
csv2sqlite