miller vs structured-text-tools

| | miller | structured-text-tools |
|---|---|---|
| Mentions | 63 | 13 |
| Stars | 8,553 | 6,865 |
| Activity | 9.1 | 8.1 |
| Latest commit | 8 days ago | 23 days ago |
| Language | Go | - |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
miller
- Qsv: Efficient CSV CLI Toolkit
-
jq 1.7 Released
jq and miller[1] are essential parts of my toolbelt, right up there with awk and vim.
[1]: https://github.com/johnkerl/miller
-
Perl first commit: a “replacement” for Awk and sed
> This works really well if your problem can be solved in one or two liners.
My personal comfort threshold is around the 100-line mark. It's even possible to write maintainable shell scripts up to 500 lines, but it mostly depends on the problem you're trying to solve, and the discipline of the programmer to follow best practices (use sane defaults, ShellCheck, etc.).
> It goes bad very quickly when, say, you have two CSV files and want to join them the SQL way.
In that case we're talking about structured data, and, yeah, Perl or Python would be easier to work with. That said, depending on the complexity of the CSV, you can still go a long way with plain Bash with IFS/read(1) or tr(1) to split CSV columns. This wouldn't be very robust, but there are tools that handle CSV specifically[1], which can be composed in a shell script just fine.
So it's always a balancing act: being productive quickly with a shell script, versus reaching for a programming language once the tools aren't a good fit or maintenance becomes an issue.
[1]: https://miller.readthedocs.io/
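The IFS/read approach mentioned above can be sketched in a few lines of plain Bash. The field names and sample data are illustrative; as the comment notes, this is not robust against quoted fields containing commas, which is where a CSV-aware tool like Miller comes in.

```shell
#!/usr/bin/env bash
# Naive CSV column splitting with IFS and read, per the comment above.
# Caveat: breaks on quoted fields that contain commas.
while IFS=, read -r name age city; do
    printf '%s lives in %s\n' "$name" "$city"
done <<'EOF'
alice,30,london
bob,25,paris
EOF
# -> alice lives in london
# -> bob lives in paris
```

For anything beyond this straightforward case, reaching for one of the CSV-specific tools is the safer bet.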
-
Need help on cleaning this data!!
where mlr is from https://github.com/johnkerl/miller
-
Running weekly average
if this class of problems (i.e., csv/tsv data) is your main target you may find miller (https://github.com/johnkerl/miller) much more useful in the long run
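For context, the kind of running-average computation the thread is about can be sketched as an awk one-liner over an assumed one-value-per-line input; Miller offers the same class of operation while keeping CSV/TSV headers intact.

```shell
# Running (cumulative) average: print each value alongside the mean so far.
printf '%s\n' 10 20 30 40 | awk '{ sum += $1; print $1, sum / NR }'
# -> 10 10
# -> 20 15
# -> 30 20
# -> 40 25
```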
-
GQL: A new SQL like query language for .git files written in Rust
That said, you may be interested in Miller (https://github.com/johnkerl/miller) which provides similar capabilities for CSV, JSON, and XML files. It doesn't use a SQL grammar, but that's just the proverbial lipstick on the thing. I'm not the author, but I have used it and I see some parallels in use cases at the very least.
- johnkerl/miller: Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
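The SQL-style capability mentioned above maps onto Miller's `join` verb. A minimal sketch, with illustrative file names and an assumed join field `id` (and a guard in case `mlr` isn't installed):

```shell
# Inner join of two CSVs on the `id` field with Miller.
cd "$(mktemp -d)"
printf 'id,name\n1,alice\n2,bob\n' > left.csv
printf 'id,score\n1,90\n2,85\n'   > right.csv
# The left file is named with -f; the right file comes from the input stream.
mlr --csv join -j id -f left.csv right.csv
```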
-
Any cli utility to create ascii/org mode tables?
worth giving Miller a shot
-
I wrote this iCalendar (.ics) command-line utility to turn common calendar exports into more broadly compatible CSV files.
CSV utilities (still haven't picked a favorite one...): https://github.com/harelba/q https://github.com/BurntSushi/xsv https://github.com/wireservice/csvkit https://github.com/johnkerl/miller
- Miller: Like Awk, sed, cut, join, and sort for CSV, TSV, and tabular JSON
structured-text-tools
- Command line tools for manipulating structured text data
-
creating a text file in Linux
This works well in scripts, and in logs of all the commands you need to reproduce the current state of the system from a scratch install. It also composes with diff -u and patch, with sed, perl, and awk one-liners, and with structured text tools. You can capture most of the commands using sudo's logging feature, but it won't capture here documents; for modest-size files you can use newlines in echo commands instead. Note that commands which use redirection should use something like ~~~~ sudo bash -c "echo 'foo' >> file.txt" ~~~~ or ~~~~ echo foo | sudo tee -a file.txt ~~~~ instead of "sudo echo foo >> file.txt", because the redirection itself is performed by the unprivileged calling shell.
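The redirection pitfall described above can be demonstrated without sudo; the protected file path below is illustrative, and the runnable part uses a temp file so nothing privileged is touched.

```shell
# In `sudo echo foo >> /etc/protected.txt`, the >> is performed by the
# calling (unprivileged) shell before sudo runs, so it fails on root-owned
# files. Working patterns:
#   sudo bash -c 'echo foo >> /etc/protected.txt'   # redirection runs as root
#   echo foo | sudo tee -a /etc/protected.txt       # tee opens the file as root
# The same bash -c technique, demonstrated without sudo:
f=$(mktemp)
bash -c "echo 'foo' >> $f"
cat "$f"   # -> foo
rm -f "$f"
```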
-
Using Commandline to Process CSV Files
TFA is about how to handle CSV files with awk, which can be useful in straightforward cases.
For everything else, I'd recommend having a look at
https://github.com/dbohdan/structured-text-tools
which lists tools for handling structured text formats
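The straightforward case the comment refers to is unquoted CSV, where awk's -F field separator is enough; the sample data is illustrative.

```shell
# Print the first column of a simple (unquoted) CSV, skipping the header.
printf 'name,age\nalice,30\nbob,25\n' | awk -F, 'NR > 1 { print $1 }'
# -> alice
# -> bob
```

Once fields can contain quoted commas, this breaks, and the CSV-specific tools in the list above are the better fit.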
-
Combine multiple files
in general, I'd pick something from https://github.com/dbohdan/structured-text-tools
- Show HN: Xq – command-line XML and HTML beautifier and content extractor
- structured-text-tools: A list of command line tools for manipulating structured text data
- A list of command line tools for manipulating structured text data
-
What is your favourite Linux backup software and why?
Also, here is a list of structured text tools. You may find some tools there that are helpful for editing configuration files from the command line. Or you can use "diff -u" to create a patch file (save the patch files along with sudo.log) that can be replayed to recreate the configuration. Also, use sfdisk --dump and sfdisk --backup to save partition information in a form that can be used to recreate the partition table.
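The diff -u / patch round trip mentioned above can be sketched as follows; the config file names are illustrative.

```shell
# Record a config change as a unified diff, then replay it with patch.
cd "$(mktemp -d)"
printf 'a\nb\n' > config.orig
printf 'a\nB\n' > config.new
# diff exits 1 when the files differ, so tolerate that in scripts.
diff -u config.orig config.new > config.patch || true
patch config.orig < config.patch
# config.orig now matches config.new; config.patch is the replayable record.
```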
What are some alternatives?
visidata - A terminal spreadsheet multitool for discovering and arranging data
yq - yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
xsv - A fast CSV command line toolkit written in Rust.
tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
python-benedict - :blue_book: dict subclass with keylist/keypath support, built-in I/O operations (base64, csv, html, ini, json, pickle, plist, query-string, toml, xls, xml, yaml), s3 support and many utilities.
dasel - Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
concise-encoding - The secure data format for a modern world
csvtk - A cross-platform, efficient and practical CSV/TSV toolkit in Golang
datasette - An open source multi-tool for exploring and publishing data
awesome-cli-apps - 🖥 📊 🕹 🛠 A curated list of command line apps