shahna_khalid
Dynatrace Enthusiast

 

Summary

This article addresses a common point of confusion for Dynatrace Operator users: how resource requests and limits on application pods are handled after the OneAgent is injected. We'll clarify the behavior and provide guidance on best practices for configuring your pods to ensure predictable and stable performance.

 

Problem

The Misconception: Overridden Pod Resources

 

When reviewing pod configurations with kubectl describe node <node-name>, it can appear that the CPU and memory requests/limits of your application containers are being overwritten by the Dynatrace Operator's init container values. Let's look at a concrete example to understand why this is a misunderstanding.
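To make the example concrete, here is a minimal sketch of the application pod spec behind the "Expected" row of the table below. The container name cs-cayley comes from the example; the pod name and image are hypothetical:

```yaml
# Minimal sketch of the original pod spec ("Expected" row in the table below).
# The container name cs-cayley comes from the example; pod name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: cs-cayley
spec:
  containers:
    - name: cs-cayley
      image: example.com/cs-cayley:latest   # hypothetical image
      resources:
        requests:
          cpu: 100m
          memory: 128m   # intended as 128 MiB, but "m" means something else (see below)
        # no limits defined, matching the <none> entries in the table
```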

 

Troubleshooting steps

 

| Category | CPU Request | Memory Request | CPU Limit | Memory Limit | Comments |
|----------|-------------|----------------|-----------|--------------|----------|
| Expected (container cs-cayley) | 100m | 128m | <none> | <none> | Originally defined in the pod/deployment spec |
| Init container (dynatrace-operator) | 30m | 30Mi | 100m | 60Mi | Default values set by the Dynatrace Operator |
| Actual (after injection) | 100m | 30Mi | 100m | 60Mi | Effective configuration after OneAgent injection; the values reported by kubectl describe node |

 

At first glance, it seems the app's memory request and all limit values have been replaced by the init container's settings. However, this is not the case: the Dynatrace Operator only sets resources for the init container it injects; it does not alter the resource definitions of your application containers.

kubectl describe node <node> aggregates resources at the pod level, which can mislead users into thinking the app container values were changed.
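You can confirm this by inspecting the pod itself rather than the node. Here is a trimmed, illustrative view of the pod after injection, as returned by kubectl get pod <pod-name> -o yaml; the init container name shown here may vary by Operator version:

```yaml
# Trimmed, illustrative view of the injected pod (kubectl get pod <pod-name> -o yaml).
spec:
  initContainers:
    - name: install-oneagent   # injected by the Dynatrace Operator; name may vary by version
      resources:
        requests:
          cpu: 30m
          memory: 30Mi
        limits:
          cpu: 100m
          memory: 60Mi
  containers:
    - name: cs-cayley
      resources:
        requests:
          cpu: 100m
          memory: 128m        # unchanged: the Operator did not modify the app container
```

The containers section is exactly what was deployed; injection only added the initContainers entry.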

The apparent override is a result of how Kubernetes calculates the "effective" resource requests and limits for a pod.

 

How Kubernetes Calculates Pod Resources

Kubernetes uses a specific logic to determine the total resource requirements for a pod, which the scheduler then uses to place the pod on an appropriate node. The calculation takes into account all containers within the pod, including init containers.

The key formulas are:

  • Effective Pod Request:
    max(sum(app container requests), max(init container requests))
  • Effective Pod Limit:
    max(sum(app container limits), max(init container limits))

Let's apply these formulas to our example:

  • Effective CPU Request:
    max(100m, 30m) = 100m
  • Effective Memory Request:
    max(128m, 30Mi) = 30Mi (128m is parsed as millibytes, far less than 30Mi; see the note on unit formats below)
  • Effective CPU Limit:
    max(0, 100m) = 100m
  • Effective Memory Limit:
    max(0, 60Mi) = 60Mi

The values reported by kubectl describe node reflect this effective pod configuration, which is what the Kubernetes scheduler uses. This is why a Memory Request of 30Mi and Limits of 100m/60Mi are displayed, not because the Dynatrace Operator modified your app container's spec, but because these were the highest values in the pod's total configuration.
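Written out against the spec shown earlier, the derivation looks like this. Note that effectiveResources is not a real Kubernetes field; it appears here only to annotate the arithmetic the scheduler performs:

```yaml
# Illustrative only: "effectiveResources" is NOT a real Kubernetes field.
# These values are derived by the scheduler from the spec shown earlier.
effectiveResources:
  requests:
    cpu: 100m     # max(sum(100m), max(30m))  -> the app container value wins
    memory: 30Mi  # max(sum(128m), max(30Mi)) -> 128m is millibytes, so the init container wins
  limits:
    cpu: 100m     # max(sum(none), max(100m)) -> only the init container defines a limit
    memory: 60Mi  # max(sum(none), max(60Mi)) -> only the init container defines a limit
```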

 

A Note on Memory Unit Formats

Another factor in the example above is the app container's memory request of 128m. In Kubernetes, m is the suffix for "millicores" when specifying CPU resources. For memory, the correct suffixes are Mi (mebibytes) or M (megabytes). Kubernetes accepts 128m as a valid quantity, but interprets it as "millibytes", i.e., 0.128 bytes, a negligible amount of memory. This is why the init container's 30Mi request becomes the effective memory request for the pod.
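A corrected resources block for the example would use the Mi suffix; a minimal sketch of the fix:

```yaml
# Corrected resources block: "Mi" (mebibytes) instead of "m" (milli, i.e. 128m = 0.128 bytes).
resources:
  requests:
    cpu: 100m      # "m" is correct for CPU: millicores
    memory: 128Mi  # mebibytes, as originally intended
```

With this fix, the effective pod memory request becomes max(128Mi, 30Mi) = 128Mi, and the node view matches expectations.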

 

Resolution

The Dynatrace team recognizes that this behavior can be confusing. To provide a clearer and more predictable experience, a long-term solution is in the planning and research phase. The goal is to make resource requests and limits more configurable across all components of the Dynatrace Operator, shifting more control to the user.

Key aspects of this holistic solution include:

  • Configurable Resources: Allowing users to configure resource requests and limits for all components of the Operator and the components it deploys.
  • Flexible Configuration: Providing options to configure these values at different levels, such as during the Operator installation (via Helm values) or within the DynaKube custom resource.
  • Clearer Defaults: Setting the default resource requests and limits to be as non-intrusive as possible to avoid unexpected behavior.
  • Improved Documentation: Creating comprehensive guides and documentation to explain recommended resource values and how to configure them for different use cases.

While this new approach is being developed, it's crucial to understand the current logic. The Dynatrace Operator is designed to respect your pod specifications and only injects resources for the OneAgent init container. By understanding the effective resource calculation logic in Kubernetes and ensuring your container specs use the correct format (e.g., 128Mi instead of 128m), you can maintain full control over your pod's resource allocation and ensure stable performance.

 

What's next

Opening a support ticket

If this article did not help, please open a support ticket and mention that this article was used.

What will change in the future

To enhance clarity and user control, Dynatrace is planning a comprehensive solution that will make resource requests and limits configurable across all Operator components, with flexible configuration options at installation time (via Helm values) and within the DynaKube custom resource, along with improved defaults and documentation to reduce confusion and support diverse use cases.
