How do Kubernetes CPU limits apply to a pod in Dynatrace with multiple containers running within it?

_danny_
Visitor

Removing CPU limits in k8s is a common practice. For our applications, we have removed CPU limits, so our app resources look like this: 

 

  containers:
    - resources:
        limits:
          memory: 500Mi
        requests:
          cpu: 200m
          memory: 250Mi

 

But when the OneAgent is injected on startup, the OneAgent init container appears, and it has a CPU limit by default: 

 

  initContainers:
    - resources:
        limits:
          cpu: 100m
          memory: 60Mi
        requests:
          cpu: 30m
          memory: 30Mi
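
To make this concrete, here is roughly what one of our pods looks like after injection. This is just a sketch: the container names are placeholders, and the injected init container name may differ depending on the operator version.

  initContainers:
    - name: install-oneagent   # injected by the Dynatrace webhook (name assumed)
      resources:
        limits:
          cpu: 100m
          memory: 60Mi
        requests:
          cpu: 30m
          memory: 30Mi
  containers:
    - name: my-app             # our app container, intentionally no CPU limit
      resources:
        requests:
          cpu: 200m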

 

 

What's interesting is that Dynatrace appears to take the CPU limit from the init container and apply it as the pod CPU limit, so I'm constantly alerted with "CPU usage is close to limit" because my app is using more than 100m of CPU per pod. 

 

1) Does Dynatrace roll up its container resource allocation to the pod resource allocation? Meaning, if the app container CPU limit is missing (on purpose), does it just fill in the blank with the init container CPU limit?

2) The init container is short-lived (it runs for less than a minute), so why would Dynatrace roll its CPU limit up to the pod at all?

I guess what I'm really asking is: is this a config issue with the way we do k8s at our company, or is this potentially fixed in the new Kubernetes app? It feels like Dynatrace isn't getting granular enough with how it alerts on containers and is rolling everything up to the pod level.

4 REPLIES

Dant3
Pro

I found the same thing in my initial tests when no CPU limit is set to avoid throttling. I also think that not setting CPU limits is a common practice, or even a best practice.

I also think the init container limits from cloud-native injection should not be added to the limit calculation, and Dynatrace itself should filter that information out. As you said, it is a short-lived container. 

Services Solution Engineer @PowerCloud - Observability/CloudOps Certified / Former SE @Dynatrace.

Totally agree with both statements. There should be a setting on the Dynatrace end to exclude the OneAgent init container from alerting/Davis AI, given its short-lived nature and the fact that it belongs to Dynatrace. It would still be good to see the init container while it's alive during start-up, just more muted. The other option would be for Dynatrace to not inherit init container values for workloads/pods.

 

I did more testing with multiple configs, for example 1 init container and 1 app container, 0 init containers and 1 app container, 1 init container and 2 app containers, etc., with different CPU limits, and below is what I found to be true:

If the app container(s) do not have a CPU limit set, Dynatrace will apply the init container CPU limit to the workload/pod.

If the app container(s) do have a CPU limit set, Dynatrace will take the sum of the CPU limit from the app container(s) and apply it as the workload/pod CPU limit.
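
For example, with a hypothetical pod like the one below (names and numbers are made up for illustration), Dynatrace reported 500m as the pod CPU limit; when I removed the CPU limits from app-1 and app-2, it reported 100m:

  initContainers:
    - name: install-oneagent    # injected, 100m CPU limit by default
      resources:
        limits:
          cpu: 100m
  containers:
    - name: app-1
      resources:
        limits:
          cpu: 250m
    - name: app-2
      resources:
        limits:
          cpu: 250m

For comparison, as I read the Kubernetes docs, the pod's own effective CPU limit is max(sum of app container limits, highest init container limit), and a container with no limit counts as unbounded, so with no app CPU limits the effective pod limit would be unbounded rather than 100m.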

 

This seems like a flaw (or maybe there is a setting/config we're missing?) in how Dynatrace identifies the correct CPU limit for a given workload/pod.

The only thing you can change is the resource definition of the init container itself, and as an init container it should always have a resource definition; something like the DynaKube snippet below. Still, there is no additional end user-facing configuration in the UI to stop Dynatrace from using the init container's data... 
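
If I remember the operator CRD correctly, that override looks something like this in the DynaKube resource (field names are from my memory of the dynatrace-operator CRD, so verify against your operator version):

  apiVersion: dynatrace.com/v1beta1
  kind: DynaKube
  metadata:
    name: dynakube
    namespace: dynatrace
  spec:
    apiUrl: https://<environment-id>.live.dynatrace.com/api   # existing connection settings unchanged
    oneAgent:
      cloudNativeFullStack:
        # initResources should override the injected init container's requests/limits
        initResources:
          requests:
            cpu: 30m
            memory: 30Mi
          limits:
            memory: 60Mi   # no CPU limit here, to match the app containers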

For me, it is a design flaw. 

Services Solution Engineer @PowerCloud - Observability/CloudOps Certified / Former SE @Dynatrace.

PacoPorro
Dynatrace Leader

Did you create a product idea for this?
