| | nio | miller |
|---|---|---|
| Mentions | 7 | 63 |
| Stars | 32 | 8,559 |
| Growth | - | - |
| Activity | 6.3 | 9.0 |
| Last Commit | 11 days ago | 9 days ago |
| Language | Nim | Go |
| License | ISC License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nio
-
GNU Parallel, where have you been all my life?
Sure. No problem.
Even Windows has popen these days. There are some tiny popenr/popenw wrappers in https://github.com/c-blake/cligen/blob/master/cligen/osUt.ni...
Depending upon how balanced the work is on either side of the pipe, you can usually even get parallel speed-up on multi-core with almost no effort. For example, there is no need for quote-escaped CSV parsing libraries when you can just read from a popen()'d translator program producing an easier format: https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim
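To show how little code that takes, here is a Python sketch (it assumes the c2tsv binary above is on $PATH; the helper name `rows` is mine):

```python
import subprocess

# Read CSV via a popen()'d translator instead of linking a quoting-aware
# CSV library; the converter and the consumer run in parallel on multi-core.
def rows(path):
    with open(path, "rb") as f:
        p = subprocess.Popen(["c2tsv"], stdin=f, stdout=subprocess.PIPE)
        for line in p.stdout:
            yield line.rstrip(b"\n").split(b"\t")  # trivial, quote-free parse
        p.wait()

for fields in rows("data.csv"):
    print(fields[0])
```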
-
Offensive Nim
I think it's a pretty nice prog.lang; you may well be happy with it. Though nothing is perfect, there is much to recommend it. By now I've written over 150 command-line tools with https://github.com/c-blake/cligen . A few are at https://github.com/c-blake/bu or https://github.com/c-blake/nio (screw 1970s COBOL-esque SQL) or in their own repos.
If it helps, I like to use the "mob branch" [0] of TinyCC/tcc [1] for really fast builds in debugging mode, but this may only work if you toss `@if tcc: mm:markAndSweep @end` or similar in your nim.cfg. Then I have a little `@if r: ...` so I can say `nim c -d:r foo` for a release build with gcc/whatever.
[0] https://repo.or.cz/w/tinycc.git
[1] https://en.wikipedia.org/wiki/Tiny_C_Compiler
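Assembled into one nim.cfg, that scheme looks roughly like this (a sketch: only the tcc line is quoted from the comment above; the body of the `@if r` block is my guess at a typical release setup):

```
@if tcc:              # debug builds: nim c --cc:tcc foo
  mm:markAndSweep     # a memory-management choice that works under tcc
@end
@if r:                # release builds: nim c -d:r foo
  cc:gcc
  define:release
@end
```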
-
Modernizing AWK, a 45-year old language, by adding CSV support
When you have "format wars", the best idea is usually to have a converter program translate to the easiest-to-work-with format - unless this incurs a massive expansion in space, as with some image/video formats.
With CSV-like data, bulk conversion from quote-escaped RFC4180 CSV to a simpler-to-parse format is the best plan for several reasons. First, it may "catch on" and help Microsoft/R/whoever embrace the format, squashing many bugs written by "data analyst/scientist coders" along the way. Second, in a shell, a|b runs programs a & b in parallel on multi-core and allows things like csv2x|head -n10000|b. Third, bulk conversion to a random-access file where literal delimiters cannot occur as non-delimiters allows trivial file segmentation to be nCores times faster (under often-satisfied assumptions); optional quoting was always going to be a PITA here due to its non-locality (what if there is no quote anywhere?). There are some D tools for this in https://github.com/eBay/tsv-utils and a much smaller stand-alone Nim tool: https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim . Fourth, by using a program as the unit of modularity you make things programming-language agnostic. Someone could even go to town and write a pure SIMD/AVX512 converter in assembly and solve the problem "once and for all" on a given CPU. The problem is actually just simple enough that this smells possible.
I am unaware of any "document" that "standardizes" this escaped/lossless TSV format. { Maybe call it "DSV" for delimiter separated values where "delimiters actually separate"? } Someone want to write an RFC or point to one? It can be just as "general/lossless" (see https://news.ycombinator.com/item?id=31352170).
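To make "delimiters actually separate" concrete, here is a Python sketch of such a converter (the backslash escape scheme is one lossless choice I am assuming, not a published standard, and this is of course much slower than c2tsv):

```python
import csv, sys

# Escape the two delimiters (and the escape char itself) inside field text,
# so every literal TAB is a column break and every literal NL is a row
# break -- the property that makes trivial file segmentation possible.
def esc(field):
    return (field.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n"))

def csv2dsv(src=sys.stdin, dst=sys.stdout):
    for row in csv.reader(src):   # RFC4180 quoting handled once, here
        dst.write("\t".join(esc(f) for f in row) + "\n")

if __name__ == "__main__":
    csv2dsv()
```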
Of course, if you are going to do a lot of data processing against some data, it is even better to parse all the way down to binary so that you never have to parse again (well, unless you call CPUs loading registers "parsing"), which is what database systems have been doing since the 1960s.
-
Unix command line conventions over time
With https://github.com/c-blake/nio/blob/main/utils/catz.nim you can get similar format-agnostic decoding/decompression not just in tar but in any pipeline context, based on magic numbers rather than filename extensions, even doing the copy loop needed for unseekable inputs to replace the early read.
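The core trick is small enough to sketch in Python (an abbreviated magic-number table and a made-up helper name; catz itself handles more formats and options):

```python
import subprocess, sys

MAGIC = [                               # (prefix, decompressor) pairs
    (b"\x1f\x8b",         ["gzip", "-dc"]),
    (b"BZh",              ["bzip2", "-dc"]),
    (b"\xfd7zXZ\x00",     ["xz", "-dc"]),
    (b"\x28\xb5\x2f\xfd", ["zstd", "-dc"]),
]

def catz(src=sys.stdin.buffer, dst=sys.stdout.buffer):
    head = src.read(8)                  # the early read to sniff the format...
    out, proc = dst, None
    for magic, cmd in MAGIC:
        if head.startswith(magic):
            proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=dst)
            out = proc.stdin
            break
    out.write(head)                     # ...then replay it, since an unseekable
    for chunk in iter(lambda: src.read(65536), b""):  # pipe cannot rewind
        out.write(chunk)
    if proc:
        out.close()
        proc.wait()

if __name__ == "__main__":
    catz()
```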
-
Show HN: Prig – like Awk, but uses Go for “scripting”
Part of the value prop is that you can directly author/access any library code in your native prog.lang in your query as naturally as native imports and routine calls. Awk is pretty established and for trivial one-liners you may not need any libs, but there are a lot more Go/Python/etc. libs than awk libs, AFAICT.
Beyond lib existence there is lib API understanding. There is not only language learning to be saved, but also lib ecosystem learning. I'm sure 'perl -na' folks could rattle off dozens of examples with CPAN package PDQ. I know many people who would say learning the ecosystem is what takes the most time...
There is also little barrier to using the same approach to "compile" your data as well as your code. This is basically how 'nio qry' works, as a kind of open-architecture ghetto DB. [1] I mean, yeah, Big Iron corporate database XYZ probably also has some ugly, non-portable extension language with 1970s prog.lang aesthetics that sacrifices much of the value of SQL standardization.
[1] https://github.com/c-blake/nio
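The "compile the query" idea is easy to demonstrate generically. A toy Python version (my own illustration of the technique, not how prig or nio qry is actually implemented):

```python
import subprocess, sys, tempfile

# Paste the user's expression into a per-line loop and run it as a real
# program: any native import or routine call works, with no DSL in the way.
TEMPLATE = """\
import sys, math, json
for nr, line in enumerate(sys.stdin, 1):
    f = line.split()          # awk-ish fields
    {expr}
"""

def run_query(expr):
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(TEMPLATE.format(expr=expr))
    subprocess.run([sys.executable, tmp.name])

if __name__ == "__main__":
    run_query(sys.argv[1])    # e.g. 'print(nr, math.sqrt(float(f[0])))'
```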
-
Fast CSV Processing with SIMD
I get ~50% of the speed of the article's variant with no SIMD at all in https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim
While it's in Nim, the main logic is really just about one screenful for me and should not be so hard to follow.
As commented elsewhere, but worth repeating, a better approach is to bulk convert text to binary and then operate off of that. One feature you get is fixed-size rows and thus random access to rows without an index. You can even mmap the file and cast it to a struct pointer if you like (though you need the struct pointer to be the right type). When DBs or DB-ish file formats are faster, being in binary is the 0th-order reason why.
The main reason not to do this is if you have no disk space for it.
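A Python sketch of the fixed-size-row point (the two-column int32/float64 layout is an assumption for illustration, not nio's actual on-disk format):

```python
import mmap, struct

ROW = struct.Struct("<id")      # little-endian int32 + float64 = 12 bytes/row

def read_row(path, i):
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Row i lives at byte offset i*ROW.size: random access, no index,
        # and no parsing beyond the CPU loading registers.
        return ROW.unpack_from(m, i * ROW.size)

print(read_row("table.bin", 1_000_000))
```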
miller
- Qsv: Efficient CSV CLI Toolkit
-
jq 1.7 Released
jq and miller[1] are essential parts of my toolbelt, right up there with awk and vim.
[1]: https://github.com/johnkerl/miller
-
Perl first commit: a “replacement” for Awk and sed
> This works really well if your problem can be solved in one or two liners.
My personal comfort threshold is around the 100-line mark. It's even possible to write maintainable shell scripts up to 500 lines, but it mostly depends on the problem you're trying to solve, and the discipline of the programmer to follow best practices (use sane defaults, ShellCheck, etc.).
> It goes bad very quickly when, say, you have two CSV files and want to join them the SQL way.
In that case we're talking about structured data, and, yeah, Perl or Python would be easier to work with. That said, depending on the complexity of the CSV, you can still go a long way with plain Bash with IFS/read(1) or tr(1) to split CSV columns. This wouldn't be very robust, but there are tools that handle CSV specifically[1], which can be composed in a shell script just fine.
So it's always a balancing act: being productive quickly with a shell script, or reaching for a programming language once the tools aren't a good fit or maintenance becomes an issue.
[1]: https://miller.readthedocs.io/
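For the two-CSV join case quoted above, even without such tools a short Python hash join does it (the file names and the "id" key column are made up for the demo):

```python
import csv

def join_csv(left_path, right_path, key="id"):
    with open(right_path, newline="") as f:
        right = {row[key]: row for row in csv.DictReader(f)}
    with open(left_path, newline="") as f:
        for row in csv.DictReader(f):
            if row[key] in right:             # inner join on the key column
                yield {**row, **right[row[key]]}

for row in join_csv("orders.csv", "customers.csv"):
    print(row)
```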
-
Need help on cleaning this data!!
where mlr is from https://github.com/johnkerl/miller
-
Running weekly average
If this class of problems (i.e., CSV/TSV data) is your main target, you may find miller (https://github.com/johnkerl/miller) much more useful in the long run.
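For reference, the thread's task is also a few lines of plain Python (a date,value CSV and a 7-row window are my assumptions about the problem):

```python
import csv, sys
from collections import deque

def weekly_avg(src=sys.stdin):
    window = deque(maxlen=7)              # keeps only the last 7 values
    for row in csv.DictReader(src):
        window.append(float(row["value"]))
        print(row["date"], sum(window) / len(window))

if __name__ == "__main__":
    weekly_avg()                          # python weekly_avg.py < daily.csv
```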
-
GQL: A new SQL like query language for .git files written in Rust
That said, you may be interested in Miller (https://github.com/johnkerl/miller) which provides similar capabilities for CSV, JSON, and XML files. It doesn't use a SQL grammar, but that's just the proverbial lipstick on the thing. I'm not the author, but I have used it and I see some parallels in use cases at the very least.
- johnkerl/miller: Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
-
Any cli utility to create ascii/org mode tables?
worth giving Miller a shot
-
I wrote this iCalendar (.ics) command-line utility to turn common calendar exports into more broadly compatible CSV files.
CSV utilities (still haven't picked a favorite one...): https://github.com/harelba/q https://github.com/BurntSushi/xsv https://github.com/wireservice/csvkit https://github.com/johnkerl/miller
- Miller: Like Awk, sed, cut, join, and sort for CSV, TSV, and tabular JSON
What are some alternatives?
jbang - Unleash the power of Java - JBang Lets Students, Educators and Professional Developers create, edit and run self-contained source-only Java programs with unprecedented ease.
visidata - A terminal spreadsheet multitool for discovering and arranging data
zsv - zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser
xsv - A fast CSV command line toolkit written in Rust.
procs - Unix process&system query&format lib&multi-command CLI in Nim
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
dasel - Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
DataProfiler - What's in your data? Extract schema, statistics and entities from datasets
csvtk - A cross-platform, efficient and practical CSV/TSV toolkit in Golang
goawk - A POSIX-compliant AWK interpreter written in Go, with CSV support
yq - yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor