This article addresses a common point of confusion for Dynatrace Operator users: how resource requests and limits on application pods are handled after the OneAgent is injected. We'll clarify the behavior and provide guidance on best practices for configuring your pods to ensure predictable and stable performance.
The Misconception: Overridden Pod Resources
When reviewing the per-pod resource summary shown by kubectl describe node <node-name>, it can appear that the CPU and memory requests/limits of your application containers are being overwritten by the Dynatrace Operator's init container values. Let's look at a concrete example to understand why this is a misunderstanding.
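For illustration, here is an abridged, hypothetical excerpt of the Non-terminated Pods section of kubectl describe node output for the pod discussed below; the namespace and percentage figures are placeholders, and exact formatting varies by Kubernetes version:

```
Non-terminated Pods:    (1 in total)
  Namespace  Name       CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------  ----       ------------  ----------  ---------------  -------------
  default    cs-cayley  100m (2%)     100m (2%)   30Mi (0%)        60Mi (0%)
```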
| Category | CPU Request | Mem Request | CPU Limit | Mem Limit | Comments |
|---|---|---|---|---|---|
| Expected (Container - cs-cayley) | 100m | 128m | <none> | <none> | Originally defined in the pod/deployment spec |
| InitContainer - dynatrace-operator | 30m | 30Mi | 100m | 60Mi | Default values set by the Dynatrace Operator |
| Actual (after injection) | 100m | 30Mi | 100m | 60Mi | New configuration after OneAgent injection; the values reported by kubectl describe node |
At first glance, it seems the app's Memory Request and all Limit values have been replaced by the init container's settings. However, this is not the case. The Dynatrace Operator only sets resources for the init container it injects; it does not alter the resource definitions of your application containers.
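To make this concrete, here is a minimal sketch of what the pod spec looks like after injection, using the container and init container names from the table above; the image references are hypothetical, and the actual name and image of the injected init container may differ by Operator version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cs-cayley
spec:
  containers:
    - name: cs-cayley
      image: registry.example.com/cs-cayley:latest  # hypothetical image
      resources:
        requests:
          cpu: 100m
          memory: 128m  # exactly as defined in the deployment spec (note the suffix, discussed below)
  initContainers:
    - name: dynatrace-operator                      # injected by the Operator
      image: registry.example.com/oneagent:latest   # hypothetical image
      resources:
        requests:
          cpu: 30m
          memory: 30Mi
        limits:
          cpu: 100m
          memory: 60Mi
```

Note that the application container's resources block is byte-for-byte what you defined; only the initContainers entry is new.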
kubectl describe node <node> aggregates resources at the pod level, which can mislead users into thinking the app container's values were changed. The apparent override is a result of how Kubernetes calculates the "effective" resource requests and limits for a pod.
Kubernetes uses a specific logic to determine the total resource requirements for a pod, which the scheduler then uses to place the pod on an appropriate node. The calculation takes into account all containers within the pod, including init containers.
The key formulas are:
- Effective pod request = max( sum of all app container requests, highest request among the init containers ), calculated separately for CPU and memory.
- Effective pod limit = max( sum of all app container limits, highest limit among the init containers ), likewise per resource.
Let's apply these formulas to our example:
- CPU Request: max(100m, 30m) = 100m
- Memory Request: max(128m, 30Mi) = 30Mi (128m is effectively zero bytes; see below)
- CPU Limit: max(0, 100m) = 100m (the app container sets no limit, so it contributes nothing to the sum)
- Memory Limit: max(0, 60Mi) = 60Mi
The values reported by kubectl describe node reflect this effective pod configuration, which is what the Kubernetes scheduler uses. This is why a Memory Request of 30Mi and Limits of 100m/60Mi are displayed, not because the Dynatrace Operator modified your app container's spec, but because these were the highest values in the pod's total configuration.
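To confirm that the app container's spec itself is untouched, query the pod directly instead of relying on the node summary. A minimal example, assuming the pod is named cs-cayley and lives in the default namespace:

```
# Prints each app container's name and its resources block, straight from the pod spec
kubectl get pod cs-cayley -n default \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
```

The output will show the original requests and limits from your deployment, regardless of what kubectl describe node reports.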
Another factor in the example above is the app container's Memory Request of 128m. In Kubernetes, m is the suffix for millicores when specifying CPU resources; for memory, the correct suffixes are Mi (mebibytes) or M (megabytes). Kubernetes accepts 128m as a syntactically valid quantity, but interprets it as 128 millibytes (0.128 bytes), a negligible amount of memory. This is why the init container's 30Mi request becomes the effective memory request for the pod.
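A corrected resources stanza for the app container would look like the following sketch; the values themselves should of course be tuned to your workload:

```yaml
resources:
  requests:
    cpu: 100m      # 100 millicores = 0.1 CPU cores
    memory: 128Mi  # 128 mebibytes, not "128m" (which means 0.128 bytes)
```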
The Dynatrace team recognizes that this behavior can be confusing. To provide a clearer and more predictable experience, a long-term solution is in the planning and research phase. The goal is to make resource requests and limits more configurable across all components of the Dynatrace Operator, with a shift toward providing more control to the user.
Key aspects of this holistic solution include:
- Flexible configuration options at installation time and within the DynaKube custom resource.
- Improved defaults for the resources the Operator injects.
- Better documentation, to reduce confusion and support diverse use cases.
While this new approach is being developed, it's crucial to understand the current logic. The Dynatrace Operator is designed to respect your pod specifications and only injects resources for the OneAgent init container. By understanding the effective resource calculation logic in Kubernetes and ensuring your container specs use the correct format (e.g., 128Mi instead of 128m), you can maintain full control over your pod's resource allocation and ensure stable performance.
If this article did not help, please open a support ticket, mention that this article was used and provide the following in the ticket: