crumsort
-
Blitsort: An ultra-fast in-place stable hybrid merge/quick sort
Blitsort is a hybrid quicksort, see title.
It is slower than its unstable brother, aptly named crumsort: https://github.com/scandum/crumsort
- Crumsort: Introduction to a new unstable sorting algorithm faster than pdqsort
- 380 points in 6 hours
- Crumsort: Introduction to a new sorting algorithm faster than pdqsort
-
Go will use pdqsort in the next release
https://github.com/scandum/crumsort claims better performance than pdqsort
-
Changing std::sort at Google’s Scale and Beyond
Any chance you could comment on fluxsort[0], another fast quicksort? It's stable and uses a buffer about the size of the original array, which sounds like it puts it in a similar category as glidesort. There are benchmarks against pdqsort at the end of that README; I can verify that it's faster on random data by 30% or so, and the stable partitioning should mean it's at least as adaptive (but the current implementation uses an initial analysis pass followed by an adaptive mergesort rather than optimistic insertion sort to deal with nearly-sorted data, which IMO is fragile). There's an in-place effort called crumsort[1] along similar lines, but it's not stable.
I've been doing a lot of work on sorting[2], in particular working to hybridize various approaches better. Very much looking forward to seeing how glidesort works.
[0] https://github.com/scandum/fluxsort
[1] https://github.com/scandum/crumsort
[2] https://mlochbaum.github.io/BQN/implementation/primitive/sor...
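The stable partitioning the comment above describes can be sketched in a few lines. This is a minimal illustration of the general technique (not fluxsort's actual code): with a full-size auxiliary buffer, a quicksort partition can be made stable by streaming small elements forward in place while stashing large ones in the buffer, preserving relative order on both sides.

```c
#include <stddef.h>

/* Sketch of a stable partition using a full-size auxiliary buffer.
   Elements <= pivot are compacted to the front of `a` in their original
   order; elements > pivot are collected in `buf` in order and copied back
   after them. Returns the index of the first element > pivot. */
static size_t stable_partition(int *a, int *buf, size_t n, int pivot)
{
    size_t lo = 0, hi = 0;
    for (size_t i = 0; i < n; i++) {
        if (a[i] <= pivot)
            a[lo++] = a[i];   /* lo <= i always, so this never clobbers unread input */
        else
            buf[hi++] = a[i]; /* large elements keep their relative order too */
    }
    for (size_t i = 0; i < hi; i++)
        a[lo + i] = buf[i];   /* append the large elements after the small ones */
    return lo;
}
```

Unlike the branchy in-place Hoare/Lomuto partitions, both output runs here are written sequentially, which is what makes the branchless, order-preserving variants in fluxsort-style sorts possible.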
ram_bench
- The Myth of RAM (2014)
-
Blitsort: An ultra-fast in-place stable hybrid merge/quick sort
> Radix sort is theoretically O(N),
Nothing theoretical about it: Sorting a list of all IP addresses can absolutely and trivially be done in O(N)
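For concreteness, here is a minimal sketch of that linear-time sort: an LSD radix sort over 32-bit values (e.g. IPv4 addresses) using four counting passes on 8-bit digits, so the total work is 4·N plus a constant 4·256 of bucket bookkeeping, i.e. O(N).

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* O(N) LSD radix sort for 32-bit values: four stable counting passes,
   one per byte, from least to most significant. */
void radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        for (size_t i = 0; i < n; i++)
            count[((a[i] >> shift) & 0xFF) + 1]++;      /* histogram this byte */
        for (int d = 0; d < 256; d++)
            count[d + 1] += count[d];                    /* prefix sums -> bucket offsets */
        for (size_t i = 0; i < n; i++)
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i]; /* stable scatter */
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}
```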
> in reality you can't do better than O(log N)
You can't even traverse the list in fewer than N steps, so any sort must be ≥ N — but that's Ω(N), not O(log N).
> but memory access is logarithmic
No it's not, but it's also irrelevant: a radix sort doesn't need any reads if the values are unique and dense (as is the case for IP addresses, permutation arrays, and so on).
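The "no reads" point is easiest to see in the extreme case: if the input is known to be exactly the integers 0..n-1 in some order (dense and unique, e.g. a permutation array), the sorted result is fully determined in advance and can simply be written out. A trivial sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* If `a` is known to contain exactly the values 0..n-1 (dense and unique),
   "sorting" it requires no reads at all: the sorted output is just the
   identity sequence, written in place. */
void sort_permutation(uint32_t *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = (uint32_t)i;
}
```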
> Edit: I misremembered, memory access is actually O(sqrt(N)): https://github.com/emilk/ram_bench
It's not that either.
The author ran out of memory: they ran a program that needs 10 GB of RAM on a machine with only 8 GB. If you give that program enough memory (I have around 105 GB free) it produces a silly graph that looks nothing like O(√N): https://imgur.com/QjegDVI
The latency of accessing memory is not a function of N.
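For anyone wanting to test this themselves, the kind of measurement being argued about can be sketched as a pointer-chasing loop in the spirit of ram_bench (this is not ram_bench's actual code): make each access depend on the previous one so latency can't be hidden, and divide elapsed time by the number of hops.

```c
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

/* Estimate average memory-access latency for a working set of n pointers.
   Sattolo's algorithm (j < i) builds a single random cycle, so the walk
   visits all n slots and every hop is a dependent load.
   Note: clock() measures CPU time at coarse resolution; a serious
   benchmark would use a monotonic high-resolution wall clock. */
static double chase(size_t n, size_t hops)
{
    size_t *next = malloc(n * sizeof *next);
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {   /* Sattolo: one full cycle */
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (size_t h = 0; h < hops; h++)
        p = next[p];                       /* dependent loads: no overlap */
    clock_t t1 = clock();

    volatile size_t sink = p;              /* keep the walk from being optimized out */
    (void)sink;
    free(next);
    return (double)(t1 - t0) / CLOCKS_PER_SEC / (double)hops;
}
```

Sweeping n from cache-sized to RAM-sized working sets shows the step-wise latency plateaus of the cache hierarchy, which is the shape being debated above.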
What are some alternatives?
fluxsort - A fast branchless stable quicksort / mergesort hybrid that is highly adaptive.
blitsort - Blitsort is an in-place stable adaptive rotate mergesort / quicksort.
awesome-algorithms - A curated list of awesome places to learn and/or practice algorithms.
highway - Performance-portable, length-agnostic SIMD with runtime dispatch
SHOGUN - Shōgun
awesome-theoretical-computer-science - The interdisciplinary field of Mathematics and Computer Science, distinguished by its emphasis on mathematical technique and rigour.
combsort.h - optimized combsort macro
go - The Go programming language
xeus-cling - Jupyter kernel for the C++ programming language