https://stackoverflow.com/questions/65551215/get-docker-cpu-...
It's been a while, but I believe dotnet implements this exact behavior. Sounds like gunicorn needs a PR to mimic it, if they want to replicate this.
https://github.com/dotnet/runtime/issues/8485
We use https://github.com/uber-go/automaxprocs after we joyfully discovered that Go assumed any particular pod had the entire node's CPU count. Made for some very strange performance characteristics when scheduling goroutines.
Previously we had no limit. We observed gains in both latency and throughput by implementing Automaxprocs and decided to roll it out widely.
This aligns with what others have reported on the Go runtime issue open for this.
"When go.uber.org/automaxprocs rolled out at Uber, the effect on containerized Go services was universally positive. At least at the time, CFS imposed such heavy penalties on Go binaries exceeding their CPU allotment that properly tuning GOMAXPROCS was a significant latency and throughput improvement."
https://github.com/golang/go/issues/33803#issuecomment-14308...
> I wondered for a while if docker could make a fake /proc/cpuinfo
This exists: https://github.com/lxc/lxcfs
lxcfs is a FUSE filesystem that mocks /proc by inferring values from cgroups, so that applications and libraries work without having to care whether they run in a container (to the best of its ability - there are definitely caveats).
For example, /proc/uptime reflects the uptime of the container, not the host; additionally, /proc/cpuinfo reflects a CPU count derived from cpu.max and cpuset.cpus (whichever is lower).
As others have mentioned, the number of usable CPUs can also be inferred with the sched_getaffinity syscall - this doesn't depend on /proc/cpuinfo, so depending on which library you're using, you might still be in a pickle.