CoreFreq VS core-to-core-latency

Compare CoreFreq vs core-to-core-latency and see what their differences are.

                      CoreFreq                                core-to-core-latency
Mentions              34                                      11
Stars                 1,917                                   934
Growth                -                                       -
Activity              9.5                                     1.8
Last commit           8 days ago                              over 1 year ago
Language              C                                       Jupyter Notebook
License               GNU General Public License v3.0 only    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

CoreFreq

Posts with mentions or reviews of CoreFreq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-08.

core-to-core-latency

Posts with mentions or reviews of core-to-core-latency. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-20. A minimal sketch of the measurement technique these tools use appears after the list below.
  • Show HN: Visualize core-to-core latency on Linux in ~200 lines of C and Python
    2 projects | news.ycombinator.com | 20 Aug 2023
    The project is a port of https://github.com/nviennot/core-to-core-latency from Rust to C.
  • Compute Express Link CXL Latency How Much Is Added at HC34 (2022)
    1 project | news.ycombinator.com | 7 Jun 2023
    Very close to the point where SMT/HyperThreading might be enough, where we can just soak the latency & treat it basically like main memory. I would not be shocked to see SMT3 or SMT4 show up once we see massively many-core, scale-out CPUs with gobs of memory. Loads and stores take longer, so pipelines stall, so you want to be able to keep the core busy by switching to other work.

    Also, the pyramid in the diagram paints a somewhat sunny picture. I'd love some better numbers to stare at. But on a 1P AMD Milan, core-to-core latency within an 8-core CCX is in the low 20ns range. That's tiny! Accessing memory on any other CCX has to go off the CCX to the IOD and back, which is high-80s to 110ns latency. This example is from an AWS c6a.metal. https://github.com/nviennot/core-to-core-latency#amd-epyc-7r...

    Intel Ice Lake (c6i.metal), being monolithic, starts way worse. Any communication has to traverse a shared ring bus and thus takes 40-65ns. https://github.com/nviennot/core-to-core-latency#intel-xeon-...

    The M1 Pro is neat. The 8-core chip has three CCX-like complexes: two performance clusters of 3 cores each and one efficiency cluster of 2 cores. Smart. Latency is ~40ns within a cluster, ~150ns across clusters. https://github.com/nviennot/core-to-core-latency#apple-m1-pr...

    Doing anything off the first socket on AMD is terrible: 90-110ns across CCXs within the same socket, but any communication involving the 2nd socket is a staggering 190ns to 210ns. https://github.com/nviennot/core-to-core-latency#dual-amd-ep... That's around what the pyramid shows as the upper end for CXL memory (170-250ns).

    Please also note that these figures probably don't scale linearly with core clock speed, but they probably do scale somewhat, so direct comparison is ill-advised. Still, it's good, interesting data showing some very contemporary latency situations deep in the heart of computing that CXL is unlikely to beat.

    Using core-to-core latency is a weird proxy, but it is illuminative of how complex and odd providing system connectivity to the smaller CCX core clusters is. More on point is main memory latency. AnandTech has great coverage of core-to-core and, crucially, main memory latency too. There's a lot of nuance and config variance here (NPS0-4), but there's generally a regime where a cluster can get around 12ns access, yet it can very quickly ramp up to 110-130ns when trying to access wide ranges of memory. It starts to look like a core-to-core-grade speed hit. https://www.anandtech.com/show/16529/amd-epyc-milan-review/4

    Notably, the IOD is basically a northbridge controller connecting all the individual CCX clusters: key for talking to other clusters, key for talking to memory, key for exposing PCIe/CXL. If core-to-core is, say, 150ns, it's quite possible CXL's additional overhead could actually be marginal! Or maybe not; maybe it will sit entirely on top of this hit. Probably too early to tell.

    My gut feel is this pyramid is off. The peak is not as fast as they make it look today. But what exactly that means for CXL's latency is unknown.

  • Intel Linux Kernel Optimizations Show Huge Benefit For High Core Count Servers
    1 project | /r/linux | 30 Mar 2023
    Yeah, but then you run into NUMA boundaries, and it's just a whole headache. Even cores within the same CPU communicate with each other at different speeds, which can make multithreading less efficient. https://github.com/nviennot/core-to-core-latency
  • Measuring CPU core-to-core latency
    1 project | /r/patient_hackernews | 18 Sep 2022
    1 project | /r/hackernews | 18 Sep 2022
  • Core-to-core latencies of the AMD EPYC Milan, 3rd gen
    4 projects | /r/Amd | 18 Sep 2022
  • Measuring core-to-core latency (in Rust)
    1 project | /r/hypeurls | 18 Sep 2022
    4 projects | news.ycombinator.com | 18 Sep 2022
  • A tool to measure core-to-core latencies in Rust
    1 project | /r/rust | 18 Sep 2022
  • Analysis of core-to-core latencies
    2 projects | /r/intel | 18 Sep 2022
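
A minimal sketch of the ping-pong technique these tools are built around (illustrative only, not code from CoreFreq, core-to-core-latency, or the C port mentioned above): pin two threads to two cores and bounce a counter through a shared atomic cache line, timing many round trips. The core numbers 0 and 1 and the iteration count are arbitrary placeholders; the real tools add warm-up, report distributions rather than a single mean, and sweep every core pair.

    /* Build with: gcc -O2 -pthread c2c_sketch.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define ROUND_TRIPS 100000

    static _Atomic uint32_t flag = 0;            /* shared cache line both cores contend on */

    static void pin_to_core(int core)            /* restrict the calling thread to one core */
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *responder(void *arg)
    {
        (void)arg;
        pin_to_core(1);                          /* assumption: core 1 exists and is online */
        for (uint32_t i = 1; i <= ROUND_TRIPS; i++) {
            while (atomic_load(&flag) != 2 * i - 1)  /* wait for the ping */
                ;
            atomic_store(&flag, 2 * i);              /* answer with the pong */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, responder, NULL);
        pin_to_core(0);                          /* assumption: core 0 exists and is online */

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (uint32_t i = 1; i <= ROUND_TRIPS; i++) {
            atomic_store(&flag, 2 * i - 1);          /* ping */
            while (atomic_load(&flag) != 2 * i)      /* wait for the pong */
                ;
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        pthread_join(t, NULL);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        /* One round trip is two core-to-core transfers, so halve the per-trip time. */
        printf("core 0 <-> core 1: ~%.1f ns one way\n", ns / ROUND_TRIPS / 2.0);
        return 0;
    }

Repeating the measurement over every (i, j) core pair, rather than the fixed 0/1 pair above, is what produces the heat-map matrices in the repositories linked in these posts.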

What are some alternatives?

When comparing CoreFreq and core-to-core-latency you can also consider the following projects:

RyzenAdj - Adjust power management settings for Ryzen APUs

c2clat - A tool to measure CPU core to core latency

corectrl - Control your computer hardware (CPU/GPU) with application profiles on Linux

multichase - A pointer-chasing benchmark for measuring memory latency (a rough sketch of the technique appears at the end of this page)

cacule-cpu-scheduler - The CacULE CPU scheduler is based on interactivity score mechanism. The interactivity score is inspired by the ULE scheduler (FreeBSD scheduler).

ipc-bench - :racehorse: Benchmarks for Inter-Process-Communication Techniques

ryzen_smu - A Linux kernel driver that exposes access to the SMU (System Management Unit) for certain AMD Ryzen Processors. Read only mirror of https://gitlab.com/leogx9r/ryzen_smu

core-to-core-latency - Visualize core-to-core communication latency

cpuid2cpuflags - Tool to generate CPU_FLAGS_* for your CPU

pcm - Processor Counter Monitor [Moved to: https://github.com/intel/pcm]

MicroBenchX - Micro benchmarks CPU/GPU
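
Several of these alternatives measure main-memory rather than core-to-core latency (multichase in particular), which the Milan discussion above argues is the more on-point number. As a rough illustration of the pointer-chasing technique such benchmarks rely on (not multichase's actual code; buffer size and step count are arbitrary placeholders): build a random cycle of pointers so every load depends on the previous one, then time how long the chain takes to walk.

    /* Build with: gcc -O2 chase_sketch.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ENTRIES ((size_t)(64 * 1024 * 1024) / sizeof(void *))  /* 64 MiB, past L3 on most parts */
    #define STEPS   (10 * 1000 * 1000L)

    int main(void)
    {
        /* Build a single random cycle so each load's address depends on the
           previous load, defeating prefetchers and out-of-order overlap. */
        void **chain = malloc(ENTRIES * sizeof(void *));
        size_t *perm = malloc(ENTRIES * sizeof(size_t));
        for (size_t i = 0; i < ENTRIES; i++)
            perm[i] = i;
        srand(1);
        for (size_t i = ENTRIES - 1; i > 0; i--) {   /* crude Fisher-Yates shuffle; fine for a sketch */
            size_t j = (size_t)rand() % (i + 1);
            size_t tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
        }
        for (size_t i = 0; i < ENTRIES; i++)
            chain[perm[i]] = &chain[perm[(i + 1) % ENTRIES]];

        struct timespec t0, t1;
        void **p = &chain[perm[0]];
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < STEPS; i++)
            p = (void **)*p;                         /* each load must wait for the previous one */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg dependent-load latency: ~%.1f ns (p=%p)\n", ns / STEPS, (void *)p);
        free(chain);
        free(perm);
        return 0;
    }

Shrinking ENTRIES so the chain fits in L1, L2, or L3 instead of DRAM turns the same loop into a cache-latency measurement, which is how such tools produce latency-versus-working-set curves.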