adix
| | generate-random-numbers | adix |
|---|---|---|
| Stars | 1 | 4 |
| Mentions | 0 | 38 |
| Growth | - | - |
| Activity | 0.0 | 7.2 |
| Last commit | about 3 years ago | 11 days ago |
| Language | Zig | Nim |
| License | - | ISC License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
generate-random-numbers
-
Performance comparison: counting words in Python, Go, C++, C, Awk, Forth, Rust
Here is a similar exercise: ad-hoc programs in different languages generate a long list of random numbers. How long does it take? https://github.com/posch/generate-random-numbers
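As a rough sketch of the exercise (a Python stand-in, not one of the ad-hoc programs from the linked repo), one can time how long it takes to generate a long list of random numbers:

```python
import random
import time

def generate_random_numbers(n, seed=42):
    """Generate n pseudo-random integers, as a stand-in for the
    ad-hoc per-language programs in the linked repo."""
    rng = random.Random(seed)
    return [rng.randrange(1_000_000_000) for _ in range(n)]

if __name__ == "__main__":
    n = 1_000_000
    t0 = time.perf_counter()
    nums = generate_random_numbers(n)
    elapsed = time.perf_counter() - t0
    print(f"generated {len(nums)} numbers in {elapsed:.3f}s")
```

Each language's RNG, allocator, and I/O path contribute differently, which is part of what makes such cross-language comparisons tricky.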
adix
-
I/O is no longer the bottleneck
Note: Just concatenating the bibles keeps your hash map artificially small... which matters because, as you correctly note, the big deal is whether you can fit the histogram in the L2 cache. This really matters if you go parallel, where N CPUs' private L2 caches can speed things up a lot -- *until* your histograms blow out CPU-private L2 cache sizes. https://github.com/c-blake/adix/blob/master/tests/wf.nim (or a port to your favorite language) might make it easy to play with these ideas.
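A minimal word-frequency histogram along the lines of wf.nim (a Python sketch, not the actual adix code) makes the point concrete: the histogram's memory footprint is driven by the number of *distinct* words, not by input length.

```python
from collections import Counter

def word_frequency(text):
    """Count word occurrences. The histogram's size (distinct keys),
    not the input length, is what must fit in the L2 cache."""
    return Counter(text.split())

# Concatenating the same text k times grows the input k-fold,
# multiplies every count by k, but leaves the key set (and so the
# hash map's footprint) unchanged.
```

So benchmarking on a corpus made by repeating one text keeps the table artificially small and cache-resident, flattering any implementation.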
-
A Cost Model for Nim
which is notably logarithmic - not unlike a B-Tree.
When these expectations are exceeded, you can at least detect a DoS attack. If you wait until such attacks are seen, you can activate a "more random" mitigation on the fly at about the same cost as "the next resize/re-org/whatnot".
All you need to do is instrument your search to track the probe depth. There is an example of such a strategy, in Nim, at https://github.com/c-blake/adix for simple Robin-Hood linear-probed tables.
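A sketch of the idea (in Python, using plain linear probing rather than the Robin-Hood variant adix implements): the table records the deepest probe sequence seen, so a caller can notice when it exceeds the roughly logarithmic depth expected for random keys.

```python
import math

class ProbeTrackedTable:
    """Open-addressing hash table with linear probing that records the
    deepest probe sequence seen, so unusually deep searches (a possible
    collision-flooding DoS) can be detected."""

    def __init__(self, capacity=64):
        self.slots = [None] * capacity  # each slot: (key, value) or None
        self.max_depth = 0              # deepest probe observed so far

    def _probe(self, key):
        # Walk forward from the home slot until we hit the key or an
        # empty slot, counting how far we had to go.
        i = hash(key) % len(self.slots)
        depth = 0
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
            depth += 1
        self.max_depth = max(self.max_depth, depth)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else default

    def depth_suspicious(self, expected=None):
        """Expected max probe depth for random keys grows roughly
        logarithmically with table size; far deeper probes suggest
        adversarially chosen keys (time to switch to a more random
        hash, e.g. at the next resize)."""
        if expected is None:
            expected = 4 * max(1, int(math.log2(len(self.slots))))
        return self.max_depth > expected
```

The threshold here (4x log2 of capacity) is an illustrative assumption, not a tuned bound; the sketch also omits resizing, which a real table would fold the mitigation into.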
-
Performance comparison: counting words in Python, Go, C++, C, Awk, Forth, Rust
Knuth-McIlroy comes up a lot. Previous discussion [1]. For this example I can make a Nim program run almost exactly the same speed as `wc -w`, yet the optimized C program runs 1.2x faster, not 3.34x slower - a whopping 4x discrepancy, much bigger than many of the ratios in the table. So people should be very cautious about drawing conclusions from any of this.
[1] https://news.ycombinator.com/item?id=24817594
[2] https://github.com/c-blake/adix/blob/master/tests/wf.nim
What are some alternatives?
word_frequency_nim - The word frequency program, written in simple Nim.
countwords - Playing with counting word frequencies (and performance) in various languages.
raikv - Persistent key value store, serverless shared memory caching
RAMCloud - **No Longer Maintained** Official RAMCloud repo
wordcount - Counting words in different programming languages.
KindleClippingsTranslator - Vocabulary word reader
tiny_sqlite - A thin SQLite wrapper for Nim
CPython - The Python programming language
cligen - Nim library to infer/generate command-line interfaces / option / argument parsing
napkin-math - Techniques and numbers for estimating system performance from first principles