When viewing Kubernetes workload metrics in most observability tools, you may notice that CPU throttling, CPU limits, and CPU requests appear blank or show no data for certain workloads, while others display them normally.
This behavior is generally expected and can be explained by how these metrics are collected in Kubernetes environments.
1. Pod activity and metric reporting
- CPU throttling data comes from Kubernetes cgroup metrics (container_cpu_cfs_throttled_periods_total vs. container_cpu_cfs_periods_total) exposed via kubelet/cAdvisor.
- If a pod is idle (no incoming requests, no CPU usage), the throttled-periods counter stops incrementing, so the throttling rate is 0. In that case, CPU throttling charts appear empty because there's nothing to report.
- Many observability agents optimize collection by marking inactive pods and reducing metric polling until activity resumes.
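The throttling ratio behind those charts is simply throttled periods divided by total periods. A minimal sketch of the calculation (the counter values are hypothetical sample readings, not live data):

```shell
# Throttling ratio = throttled CFS periods / total CFS periods.
# Hypothetical sample readings scraped from a kubelet/cAdvisor endpoint:
#   container_cpu_cfs_throttled_periods_total 150
#   container_cpu_cfs_periods_total           1000
awk 'BEGIN {
  throttled = 150   # container_cpu_cfs_throttled_periods_total
  total     = 1000  # container_cpu_cfs_periods_total
  printf "throttled %.1f%% of CFS periods\n", 100 * throttled / total
}'
# prints "throttled 15.0% of CFS periods"
```

For an idle pod, both counters stop incrementing, so this ratio produces no new data points and most dashboards render an empty chart.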
2. CPU limits and CPU requests not defined
- These values are pulled from the Kubernetes API based on the container spec.
- If resources.requests.cpu and resources.limits.cpu are not defined in the pod/deployment YAML, the fields will remain blank in your observability tool.
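A quick way to see whether a container spec defines these fields is a jsonpath query. A sketch (the pod name my-app is hypothetical, and a canned sample of the output is used so the check runs without a cluster):

```shell
# With a live cluster you could run (pod name "my-app" is hypothetical):
#   kubectl get pod my-app -o jsonpath='{.spec.containers[0].resources}'
# kubectl prints "{}" when neither requests nor limits are set.
resources='{}'   # canned sample of the jsonpath output above
if [ "$resources" = "{}" ]; then
  echo "no CPU requests/limits defined -- expect blank fields in the UI"
fi
```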
3. It’s not always a connectivity issue
If an observability agent was unable to connect to the node or kubelet, you’d typically see an explicit alert for that in the platform.
If there’s no such alert, missing CPU throttling data is usually due to lack of CPU activity or undefined resource requests/limits — not a failed connection.
4. How to verify
- Inspect the pod spec with kubectl get pod <name> -o yaml and check for resources.limits.cpu and resources.requests.cpu.
- Send synthetic load to the pod and re-check CPU throttling metrics.
- Ensure kubelet/cAdvisor endpoints are reachable from your observability agents.
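The last two verification steps can be sketched as shell commands. The pod and node names are hypothetical, and the live kubectl calls (shown as comments) assume a reachable cluster; the final line demonstrates extracting the counter value from a sample scraped metric line:

```shell
# 1. Generate synthetic CPU load inside the pod for ~30 seconds:
#    kubectl exec my-app -- sh -c 'timeout 30 sh -c "while :; do :; done"'

# 2. Scrape the node's cAdvisor endpoint through the API server proxy:
#    kubectl get --raw /api/v1/nodes/my-node/proxy/metrics/cadvisor \
#      | grep container_cpu_cfs_throttled_periods_total

# A throttled-periods counter that grows after the load run confirms both
# collection and throttling work. Extracting the value from a sample line:
sample='container_cpu_cfs_throttled_periods_total{container="my-app"} 150'
echo "$sample" | grep -o '[0-9]*$'
# prints "150"
```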
Takeaway:
If CPU throttling shows no data and CPU limits/requests are blank:
- Idle pod → no throttling data to display.
- No limits/requests defined → blank fields in your observability UI.
- This is expected behavior across most Kubernetes observability tools.
Love more, hate less; Technology for all, together we grow.