c2clat vs core-to-core-latency

| | c2clat | core-to-core-latency |
|---|---|---|
| Mentions | 2 | 11 |
| Stars | 104 | 934 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Last Commit | about 1 year ago | over 1 year ago |
| Language | C++ | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
c2clat
-
Measuring core-to-core latency (in Rust)
I have something similar but in C++: https://github.com/rigtorp/c2clat
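For context, tools like c2clat and core-to-core-latency generally measure this with a ping-pong loop: two threads pinned to specific cores bounce a value through a shared atomic, and the elapsed time is divided by the number of round trips. Below is a minimal sketch of that general technique in C++ — it is not the actual code of either project, and the core IDs and iteration count are arbitrary assumptions.

```cpp
// Minimal ping-pong sketch (Linux, g++ -O2 -pthread). Not c2clat's code.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <atomic>
#include <chrono>
#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <thread>

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    constexpr int core_a = 0, core_b = 1;   // hypothetical cores to measure
    constexpr int iters = 100000;
    std::atomic<int> flag{-1};

    std::thread responder([&] {
        pin_to_core(core_b);
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 2 * i) {}  // wait for ping
            flag.store(2 * i + 1, std::memory_order_release);         // send pong
        }
    });

    pin_to_core(core_a);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(2 * i, std::memory_order_release);                 // send ping
        while (flag.load(std::memory_order_acquire) != 2 * i + 1) {}  // wait for pong
    }
    auto end = std::chrono::steady_clock::now();
    responder.join();

    double ns = std::chrono::duration<double, std::nano>(end - start).count();
    // One round trip is two one-way hops, so halve it for a per-hop estimate.
    std::printf("core %d <-> %d: ~%.1f ns one-way\n", core_a, core_b, ns / iters / 2);
}
```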
-
The Intel i9-12900K, a Z690, and some DDR5 RAM have just arrived, what tests/benchmarks would you like to see?
Inter-core latency! https://github.com/rigtorp/c2clat
core-to-core-latency
-
Show HN: Visualize core-to-core latency on Linux in ~200 lines of C and Python
The project is a port of https://github.com/nviennot/core-to-core-latency from Rust to C.
-
Compute Express Link CXL Latency How Much Is Added at HC34 (2022)
Very close to the point where SMT/HyperThreading might be enough, where we can just soak the latency & treat it basically like main memory. I would not be shocked to see SMT3 or SMT4 show up, once we see massively many-core scale-out CPUs with gobs of memory. Loads and stores take longer, so pipelines stall, so you want to be able to keep the core busy by switching to other work.
Also the pyramid in the diagram paints a somewhat sunny picture. I'd love some better numbers to stare at. But on a 1P AMD Milan, core-to-core latency across an 8-core CCX is in the low 20s of ns. That's tiny! Accessing memory on any other CCX has to go off the CCX to the IOD and back, which is high 80s to 110ns latency. This example is from an aws c6a.metal. https://github.com/nviennot/core-to-core-latency#amd-epyc-7r...
Intel Ice Lake (c6i.metal), being monolithic, starts way worse. Any communication has to traverse a shared ring bus and thus takes 40-65ns. https://github.com/nviennot/core-to-core-latency#intel-xeon-...
M1 Pro is neat. An 8-core chip has three CCX-like complexes: two performance clusters of 3 cores each and one efficiency cluster of 2 cores. Smart. Latency is ~40ns within a cluster, ~150ns across clusters. https://github.com/nviennot/core-to-core-latency#apple-m1-pr...
Doing anything off the first socket on AMD is terrible. 90-110ns across CCXs on the same socket, but any communication involving the 2nd socket is a staggering 190ns to 210ns. https://github.com/nviennot/core-to-core-latency#dual-amd-ep... That's around what the pyramid shows as the upper end for CXL memory (170-250ns).
Please also kindly note these figures probably don't scale linearly with core clockspeed, but they probably do scale somewhat, so direct comparison is ill-advised. But it's good, interesting data showing some very contemporary latency situations deep in the heart of computing that CXL is unlikely to be better than.
Using core-to-core latency is a weird proxy, but it is illuminating about how complex & odd providing system connectivity to the smaller CCX core clusters is. More on point is talking about main memory latency. Anandtech has great coverage of core-to-core latency, and also, crucially, main memory latency. There's a lot of nuance & config variance here (NPS0-4), but there's generally a regime where a cluster can be getting around 12ns access, and it can very quickly ramp up to 110-130ns when trying to access wide ranges of memory. It starts to look like a core-to-core-grade speed hit. https://www.anandtech.com/show/16529/amd-epyc-milan-review/4
Notably the IOD is basically a Northbridge controller connecting all the individual CCX clusters: key for talking to other clusters, key for talking to memory, key for exposing PCIe/CXL. If core-to-core is 150ns, say, it's well possible CXL's additional overhead could actually be quite marginal! Maybe, or not; maybe it will be entirely on top of this hit. Too early to tell, probably.
My gut feel is this pyramid is off. The peak is not as fast as they make it look today. But what exactly that means for CXL's latency is unknown.
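The main-memory latency numbers referenced above are typically obtained with a pointer chase: dependent loads walking a randomly permuted buffer that is too large for the caches, so each step pays the full memory round trip. A minimal sketch of that idea follows; it is not taken from any project discussed here, and the buffer size and step count are arbitrary assumptions.

```cpp
// Rough pointer-chase latency sketch (g++ -O2). Buffer size/steps are arbitrary.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // ~256 MiB of indices: assumed large enough to defeat the last-level cache.
    const std::size_t n = (256ull << 20) / sizeof(std::size_t);
    std::vector<std::size_t> next(n);

    // Build one random cycle through the buffer so the hardware prefetcher
    // cannot predict the next access.
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i < n; ++i)
        next[order[i]] = order[(i + 1) % n];

    const std::size_t steps = 50'000'000;
    std::size_t p = order[0];
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < steps; ++i)
        p = next[p];  // each iteration is a dependent, serialized load
    auto end = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(end - start).count();
    // Print p so the chase cannot be optimized away.
    std::printf("~%.1f ns per dependent load (p=%zu)\n", ns / steps, p);
}
```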
-
Intel Linux Kernel Optimizations Show Huge Benefit For High Core Count Servers
Yeah, but then you run into NUMA boundaries, and it's just a whole headache. Even cores within the same CPU communicate with each other at different speeds, which can make multithreading less efficient. https://github.com/nviennot/core-to-core-latency
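One common way to deal with the NUMA issue mentioned above is to discover which node each core sits on and keep communicating threads (and their memory) on the same node. A small sketch using libnuma (link with -lnuma); this is a generic illustration, not code from the kernel optimizations or projects discussed.

```cpp
// Print the NUMA node of every configured CPU (Linux, libnuma, g++ ... -lnuma).
#include <cstdio>
#include <numa.h>

int main() {
    if (numa_available() < 0) {
        std::puts("NUMA not available on this system");
        return 0;
    }
    int ncpus = numa_num_configured_cpus();
    for (int cpu = 0; cpu < ncpus; ++cpu)
        std::printf("cpu %3d -> numa node %d\n", cpu, numa_node_of_cpu(cpu));
    return 0;
}
```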
- Measuring CPU core-to-core latency
- Core-to-core latencies of the AMD EPYC Milan, 3rd gen
- Measuring core-to-core latency (in Rust)
- A tool to measure core-to-core latencies in Rust
- Analysis of core-to-core latencies
What are some alternatives?
pcm - Processor Counter Monitor [Moved to: https://github.com/intel/pcm]
multichase
pcm - Intel® Performance Counter Monitor (Intel® PCM)
ipc-bench - :racehorse: Benchmarks for Inter-Process-Communication Techniques
thor-os - Simple operating system in C++, written from scratch
core-to-core-latency - Visualize core-to-core communication latency
ryzen_smu - A Linux kernel driver that exposes access to the SMU (System Management Unit) for certain AMD Ryzen Processors. Read only mirror of https://gitlab.com/leogx9r/ryzen_smu
USB-x360-N64Controller - N64 to x360 controller conversion using Maple Mini (STM32F1)
MicroBenchX - Micro benchmarks CPU/GPU
cva6 - The CORE-V CVA6 is an Application class 6-stage RISC-V CPU capable of booting Linux
CoreFreq - CoreFreq : CPU monitoring and tuning software designed for 64-bit processors.