adix
huniq
adix
-
I/O is no longer the bottleneck
Note: Just concatenating the bibles keeps your hash map artificially small, which matters because, as you correctly note, the big deal is whether you can fit the histogram in the L2 cache. This really matters if you go parallel, where N CPUs' L2 caches can speed things up a lot -- *until* your histograms blow out the CPU-private L2 cache sizes. https://github.com/c-blake/adix/blob/master/tests/wf.nim (or a port to your favorite lang) might make it easy to play with these ideas.
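The per-CPU-histogram idea can be sketched in Python (a minimal illustrative analogue, not the wf.nim code; the function names and `n_workers` parameter are made up here):

```python
# Illustrative sketch (not adix/wf.nim): each worker builds its own small
# histogram -- the analogue of a per-CPU, L2-resident table -- and the
# partial counts are merged at the end.
from collections import Counter

def count_chunk(words):
    # The per-worker histogram; keeping it small is what lets it stay
    # resident in a CPU-private cache in a real parallel version.
    return Counter(words)

def parallel_histogram(text, n_workers=4):
    words = text.split()
    step = max(1, len(words) // n_workers)
    chunks = [words[i:i + step] for i in range(0, len(words), step)]
    total = Counter()
    # map() here is serial; swap in a process pool to run chunks on real
    # CPUs.  The merge cost is O(number of distinct words).
    for partial in map(count_chunk, chunks):
        total.update(partial)
    return total

print(parallel_histogram("to be or not to be").most_common(2))
```

The merge step is why small histograms pay off twice: each partial table stays cache-resident while counting, and the final combine only touches the distinct words, not the whole input.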
-
A Cost Model for Nim
which is notably logarithmic - not unlike a B-Tree.
When these expectations are exceeded, you can at least detect a DoS attack. If you wait until such attacks are seen, you can activate a "more random" mitigation on the fly at about the same cost as the next resize/re-org/whatnot.
All you need to do is instrument your search to track the probe depth. There is an example of such a strategy in Nim at https://github.com/c-blake/adix for simple Robin Hood linear-probed tables.
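The strategy described above can be sketched in Python (an illustrative toy, not adix's actual Nim code; the depth threshold of 16 and the class name are arbitrary choices for the sketch):

```python
# Sketch: a linear-probed set that tracks probe depth and, when depth
# exceeds the expected bound, switches to a "more random" seeded hash at
# roughly the cost of one rehash.  (Not adix's code.)
import random

class InstrumentedTable:
    def __init__(self, cap=8):
        self.slots = [None] * cap
        self.n = 0
        self.seed = 0            # cheap, predictable hash to start with
        self.max_depth = 0       # deepest probe sequence seen so far

    def _hash(self, key):
        return hash((self.seed, key)) % len(self.slots)

    def insert(self, key):
        if (self.n + 1) * 3 > len(self.slots) * 2:   # keep load <= 2/3
            self._rehash(len(self.slots) * 2)
        i, depth = self._hash(key), 0
        while self.slots[i] is not None and self.slots[i] != key:
            i = (i + 1) % len(self.slots)
            depth += 1
        if self.slots[i] is None:
            self.slots[i] = key
            self.n += 1
        self.max_depth = max(self.max_depth, depth)
        # At bounded load the expected probe depth is O(1); a very deep
        # probe is evidence of adversarially colliding keys, so mitigate.
        if self.max_depth > 16:
            self.seed = random.getrandbits(64)   # the "more random" hash
            self._rehash(len(self.slots))        # same cost as a resize

    def _rehash(self, cap):
        old = [k for k in self.slots if k is not None]
        self.slots = [None] * cap
        self.n = len(old)
        self.max_depth = 0
        for k in old:
            i = self._hash(k)
            while self.slots[i] is not None:
                i = (i + 1) % len(self.slots)
            self.slots[i] = k

    def __contains__(self, key):
        i = self._hash(key)
        while self.slots[i] is not None:
            if self.slots[i] == key:
                return True
            i = (i + 1) % len(self.slots)
        return False
```

The point is that the instrumentation is nearly free (one counter per search), and the mitigation only runs when the observed depth says it is needed.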
-
Performance comparison: counting words in Python, Go, C++, C, Awk, Forth, Rust
Knuth-McIlroy comes up a lot. Previous discussion at [1]. For this example I can make a Nim program [2] run at almost exactly the same speed as `wc -w`, yet the optimized C program runs 1.2x faster, not 3.34x slower - a whopping 4x discrepancy, much bigger than many of the ratios in the table. So, people should be very cautious about drawing conclusions from any of this.
[1] https://news.ycombinator.com/item?id=24817594
[2] https://github.com/c-blake/adix/blob/master/tests/wf.nim
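To see how much constant factors can swing such tables, here is a small Python sketch (unrelated to wf.nim or the benchmarked programs) timing the same word-count task two idiomatic ways in one language:

```python
# Two idiomatic ways to count words in the same language can differ by a
# large constant factor -- the kind of gap that dwarfs many of the
# cross-language ratios in benchmark tables like this one.
import time
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 50_000

def manual():
    counts = {}
    for w in text.split():            # interpreted per-word loop
        counts[w] = counts.get(w, 0) + 1
    return counts

def builtin():
    return Counter(text.split())      # counting loop runs in C

def bench(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

t_m, t_b = bench(manual), bench(builtin)
print(f"manual dict loop: {t_m:.3f}s  Counter: {t_b:.3f}s  "
      f"ratio {t_m / t_b:.1f}x")
```

Both produce identical histograms; only the constant factor differs, which is exactly why single-number cross-language ratios deserve suspicion.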
huniq
-
Zet 1.0 is out (compare to uniq and comm)
How does it compare with huniq and runiq?
-
I/O is no longer the bottleneck
`sort | uniq` is really slow for this, as it has to sort the entire input first. I use `huniq`, which is much faster. I'm sure there are many similar options.
https://github.com/koraa/huniq
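A minimal Python sketch of why the hash-based approach is faster (this illustrates the idea, not huniq's actual code):

```python
# Why hash-based dedup beats `sort | uniq`: sorting is O(n log n) and must
# buffer the whole input, while a hash set gives O(n) streaming dedup that
# also preserves input order.  (Sketch of the idea, not huniq's code.)
import sys

def huniq_like(lines):
    seen = set()
    for line in lines:
        if line not in seen:   # one expected-O(1) hash lookup per line
            seen.add(line)
            yield line         # emitted immediately; nothing is buffered

if __name__ == "__main__" and not sys.stdin.isatty():
    sys.stdout.writelines(huniq_like(sys.stdin))
```

Saved as, say, `dedup.py`, then `cat access.log | python dedup.py` gives the same distinct lines as `sort -u` but streams the output and keeps first-seen order.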
-
What’s your favorite shell one liner?
For better speed, check out https://github.com/koraa/huniq
What are some alternatives?
countwords - Playing with counting word frequencies (and performance) in various languages.
fzy - A simple, fast fuzzy finder for the terminal
RAMCloud - **No Longer Maintained** Official RAMCloud repo
wordcount - Counting words in different programming languages.
repo
KindleClippingsTranslator - A vocabulary-word reader
napkin-math - Techniques and numbers for estimating system's performance from first-principles
tiny_sqlite - A thin SQLite wrapper for Nim
share-file-systems - Use a Windows/OSX like GUI in the browser to share files cross OS privately. No cloud, no server, no third party.
word_frequency_nim - The word frequency program, written in simple nim.
runiq - An efficient way to filter duplicate lines from input, à la uniq.