optimization-manual: Contains the source code examples described in the "Intel® 64 and IA-32 Architectures Optimization Reference Manual"
There are two ways I vectorized linear and binary search (in practice you often want a combination; always benchmark on your real-world datasets!):

- Do N binary searches simultaneously, where each lane is essentially doing one bsearch. Obviously, this only works if you are doing multiple searches.
- Use the VPCONFLICT instruction for the linear search parts; there's even code from the Intel SDM doing it: https://github.com/intel/optimization-manual/blob/main/chap18/ex20/avx512_vector_dp.asm
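To make the first approach concrete, here is a hedged, portable C sketch of the "one search per lane" idea (no intrinsics; `LANES`, `batched_lower_bound`, and the fixed lane count are my own illustrative choices, not code from the repo above). Every lane runs the same number of bisection steps with a branchless update, so the inner loop over lanes is the kind of code a compiler can auto-vectorize, with the per-lane loads becoming gathers under AVX-512:

```c
/* Sketch: batched lower_bound, one "lane" per query (assumes n >= 1).
 * All lanes execute ceil(log2(n)) identical bisection steps, so the
 * loop over lanes has no lane-dependent control flow. */
#include <stddef.h>

#define LANES 8

/* For each query q[i], write to out[i] the index of the first
 * element of the sorted array a[0..n-1] that is >= q[i]. */
static void batched_lower_bound(const int *a, size_t n,
                                const int *q, size_t *out)
{
    size_t base[LANES] = {0};   /* per-lane search window start */
    size_t len = n;             /* window length, same for all lanes */

    while (len > 1) {
        size_t half = len / 2;
        for (int i = 0; i < LANES; i++) {
            /* branchless in spirit: advance lane i past the lower
             * half if its midpoint element is still below the query */
            if (a[base[i] + half - 1] < q[i])
                base[i] += half;
        }
        len -= half;
    }
    for (int i = 0; i < LANES; i++)
        out[i] = (a[base[i]] < q[i]) ? base[i] + 1 : base[i];
}
```

A real AVX-512 version would keep `base` in a vector register, do the midpoint loads with a gather, and fold the comparison into a mask-add, but the control structure is exactly this.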
"very similar topic" is an understatement. Funnily enough, the "implementation to perform the best on Apple M1 after all micro-optimizations are applied" from the Conclusion makes exactly as many actual comparisons as sb_lower_bound. Out of curiosity I benchmarked the two, and orlp's lower_bound seems to perform slightly worse: ~39ns average (using gcc) vs ~33ns average for sb_lower_bound (using clang -cmov). I'm comparing the best runs for both; usual disclaimer of tested on my machine.
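For readers who haven't seen it, the core idea behind sb_lower_bound is a bisection loop where the only data-dependent operation is a conditional pointer bump, which clang compiles to a cmov rather than a branch. This is my own C paraphrase of that idea, not the author's exact code (the function name and signature here are illustrative):

```c
/* Sketch of an sb_lower_bound-style branchless search (assumes a
 * sorted array). Each iteration halves the window with one
 * comparison; the conditional add is cmov-friendly, so there are
 * no branch mispredictions on the search path. */
#include <stddef.h>

static const int *branchless_lower_bound(const int *first, size_t n,
                                         int value)
{
    size_t length = n;
    while (length > 0) {
        size_t half = length / 2;
        /* skip the elements known to be < value */
        if (first[half] < value)
            first += length - half;
        length = half;
    }
    return first; /* points at first element >= value (or one past end) */
}
```

The comparison count is the same fixed ceil(log2(n)) + 1 regardless of the query, which is why two implementations with this structure end up differing only in instruction scheduling and codegen, as the benchmark numbers above suggest.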