<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Navigating Resource Management for Dynatrace OneAgent on Kubernetes in Troubleshooting</title>
    <link>https://community.dynatrace.com/t5/Troubleshooting/Navigating-Resource-Management-for-Dynatrace-OneAgent-on/ta-p/283453</link>
    <description>&lt;P&gt;&lt;LI-TOC indent="15" liststyle="disc" maxheadinglevel="2"&gt;&lt;/LI-TOC&gt;&lt;/P&gt;
&lt;DIV class="lia-message-template-content-zone"&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;This article addresses a common point of confusion for Dynatrace Operator users: how resource requests and limits on application pods are handled after the OneAgent is injected. We'll clarify the behavior and provide guidance on best practices for configuring your pods to ensure predictable and stable performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Problem&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;The Misconception: Overridden Pod Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;When reviewing pod configurations with &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node &amp;lt;node-name&amp;gt;&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;, it can appear that the CPU and memory requests/limits of your application containers are being overwritten by the Dynatrace Operator's init container values. Let's look at a concrete example to understand why this is a misunderstanding.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Troubleshooting steps&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&lt;STRONG&gt;CPU Request&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Mem Request&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&lt;STRONG&gt;CPU Limit&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Mem Limit&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;&lt;STRONG&gt;Comments&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;Expected (Container - cs-cayley)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;128m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&amp;lt;none&amp;gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&amp;lt;none&amp;gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;Originally defined in the pod/deployment spec&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;InitContainer - dynatrace-operator&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;30m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;30Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;60Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P class="p2"&gt;&lt;SPAN class="s1"&gt;Default values set by the Dynatrace Operator.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;Actual (after injection)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;30Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;60Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;New configuration after OneAgent injection.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;The values reported in &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;At first glance, it seems the app's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; and all &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Limit&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; values have been replaced by the init container's settings. However, this is not the case. The Dynatrace Operator only sets resources for the init container it injects; it does not alter the resource definitions of your application containers.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;kubectl describe node &amp;lt;node&amp;gt;&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;&amp;nbsp;aggregates pod-level resources, which can mislead users into thinking app container values were changed&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;The apparent override is a result of how Kubernetes calculates the "effective" resource requests and limits for a pod.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;How Kubernetes Calculates Pod Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Kubernetes uses a specific logic to determine the total resource requirements for a pod, which the scheduler then uses to place the pod on an appropriate node. The calculation takes into account all containers within the pod, including init containers.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The key formulas are:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Pod Request:&lt;/STRONG&gt; &lt;BR /&gt;max(sum(app&amp;nbsp;container&amp;nbsp;requests),max(init&amp;nbsp;container&amp;nbsp;requests))&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Pod Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(sum(app&amp;nbsp;container&amp;nbsp;limits),max(init&amp;nbsp;container&amp;nbsp;limits))&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Let's apply these formulas to our example:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective CPU Request:&lt;/STRONG&gt; &lt;BR /&gt;max(100m,30m)=100m&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Memory Request:&lt;/STRONG&gt; &lt;BR /&gt;max(128m,30Mi)=30Mi&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective CPU Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(0,100m)=100m&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Memory Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(0,60Mi)=60Mi&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
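&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The calculation above can be sketched in a few lines of Python (a minimal illustration, assuming values are already normalized to a common unit: millicores for CPU, bytes for memory):&lt;/SPAN&gt;&lt;/P&gt;

```python
# Effective pod resources, following the (pre-sidecar-container) Kubernetes
# scheduling rule: the pod's effective request/limit is the larger of the
# sum over app containers and the maximum over init containers.
def effective_resource(app_values, init_values):
    # Values must share one unit (e.g. millicores or bytes);
    # an unset request/limit counts as 0.
    return max(sum(app_values), max(init_values, default=0))

MI = 1024 * 1024  # bytes per mebibyte

# CPU request (millicores): max(100m, 30m) = 100m
cpu_request = effective_resource([100], [30])

# Memory request (bytes): the app's "128m" parses to 0.128 bytes,
# so the init container's 30Mi wins.
mem_request = effective_resource([0.128], [30 * MI])

# CPU/memory limits: the app set none (treated as 0), so the
# init container's 100m / 60Mi become the effective limits.
cpu_limit = effective_resource([0], [100])
mem_limit = effective_resource([0], [60 * MI])
```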
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The values reported by &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; reflect this effective pod configuration, which is what the Kubernetes scheduler uses. This is why a &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;30Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; and &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Limits&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;100m/60Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; are displayed, not because the Dynatrace Operator modified your app container's spec, but because these were the highest values in the pod's total configuration.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;A Note on Memory Unit Formats&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Another factor in the example above is the app container's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;. In Kubernetes, &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; is the suffix for "millicores" when specifying CPU resources. For memory, the correct suffixes are Mi (mebibytes) or M (megabytes). While some Kubernetes versions may accept &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; as a valid value, they often interpret it as "millibytes," which is a negligible amount of memory (0.000128 bytes). This can lead to the init container's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;30Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; request becoming the effective memory request for the pod.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Resolution&lt;/H1&gt;
&lt;P&gt;&lt;SPAN class="s1"&gt;The Dynatrace team recognizes that this behavior can be confusing. To provide a clearer and more predictable experience, a long-term solution is in the planning and research phase. The goal is to make resource requests and limits more configurable across all components of the Dynatrace Operator, with a shift toward providing more control to the user.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Key aspects of this holistic solution include:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Configurable Resources:&lt;/STRONG&gt; Allowing users to configure resource requests and limits for all components of the Operator and the components it deploys.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Flexible Configuration:&lt;/STRONG&gt; Providing options to configure these values at different levels, such as during the Operator installation (via Helm values) or within the DynaKube custom resource.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Clearer Defaults:&lt;/STRONG&gt; Setting the default resource requests and limits to be as non-intrusive as possible to avoid unexpected behavior.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Improved Documentation:&lt;/STRONG&gt; Creating comprehensive guides and documentation to explain recommended resource values and how to configure them for different use cases.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;While this new approach is being developed, it's crucial to understand the current logic. The Dynatrace Operator is designed to respect your pod specifications and only injects resources for the OneAgent init container. By understanding the effective resource calculation logic in Kubernetes and ensuring your container specs use the correct format (e.g., &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; instead of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;), you can maintain full control over your pod's resource allocation and ensure stable performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What's next&lt;/H1&gt;
&lt;H2&gt;&lt;STRONG&gt;Opening a support ticket&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;If this article did not help, please open a support ticket, mention that this article was used, and provide the following:&lt;/P&gt;
&lt;DIV class="p-client_container"&gt;
&lt;DIV class="p-ia4_client_container"&gt;
&lt;DIV class="p-ia4_client p-ia4_client--with-search-in-top-nav p-ia4_client--workspace-switcher-rail-visibletest p-ia4_client--sidebar-wide p-ia4_client--narrow-feature-on"&gt;
&lt;DIV class="p-client_workspace_wrapper" role="tabpanel" aria-label="Dynatrace"&gt;
&lt;DIV class="p-client_workspace" role="tabpanel" aria-label="DMs"&gt;
&lt;DIV class="p-client_workspace__layout"&gt;
&lt;DIV class="active-managed-focus-container" role="none"&gt;
&lt;DIV class="p-view_contents p-view_contents--primary" tabindex="-1" role="dialog" aria-label="Conversation with Anton Konikov"&gt;
&lt;DIV class="tabbed_channel__Abx5r"&gt;
&lt;DIV class="tabbed_channel__Abx5r"&gt;
&lt;DIV class="channel_tab_panel__zJ5Bt c-tabs__tab_panel c-tabs__tab_panel--active c-tabs__tab_panel--full_height" role="none" data-qa="tabs_content_container"&gt;
&lt;DIV class="p-file_drag_drop__container"&gt;
&lt;DIV class="p-workspace__primary_view_body"&gt;
&lt;DIV class="p-message_pane p-message_pane--classic-nav p-message_pane--scrollbar-float-adjustment p-message_pane--with-bookmarks-bar" data-qa="message_pane"&gt;
&lt;DIV role="presentation"&gt;
&lt;DIV class="c-virtual_list c-virtual_list--scrollbar c-message_list c-message_list--floating c-message_list--dark c-scrollbar c-scrollbar--fade" role="presentation"&gt;
&lt;DIV class="c-scrollbar__hider" role="presentation" data-qa="slack_kit_scrollbar"&gt;
&lt;DIV class="c-scrollbar__child" role="presentation"&gt;
&lt;DIV class="c-virtual_list__scroll_container" tabindex="-1" role="list" data-qa="slack_kit_list" aria-label="Anton Konikov (direct message, active)"&gt;
&lt;DIV id="1734101723.604509" class="c-virtual_list__item" tabindex="0" role="listitem" aria-setsize="-1" data-qa="virtual-list-item" data-item-key="1734101723.604509"&gt;
&lt;DIV class="c-message_kit__background p-message_pane_message__message c-message_kit__message p-message_pane_message__message--last" role="presentation" data-qa="message_container" data-qa-unprocessed="false" data-qa-placeholder="false"&gt;
&lt;DIV class="c-message_kit__hover" role="document" aria-roledescription="message" data-qa-hover="true"&gt;
&lt;DIV class="c-message_kit__actions c-message_kit__actions--above"&gt;
&lt;DIV class="c-message_kit__gutter"&gt;
&lt;DIV class="c-message_kit__gutter__right" role="presentation" data-qa="message_content"&gt;
&lt;DIV class="c-message_kit__blocks c-message_kit__blocks--rich_text"&gt;
&lt;DIV class="c-message__message_blocks c-message__message_blocks--rich_text" data-qa="message-text"&gt;
&lt;DIV class="p-block_kit_renderer" data-qa="block-kit-renderer"&gt;
&lt;DIV class="p-block_kit_renderer__block_wrapper p-block_kit_renderer__block_wrapper--first"&gt;
&lt;DIV class="p-rich_text_block" dir="auto"&gt;
&lt;UL class="p-rich_text_list p-rich_text_list__bullet p-rich_text_list--nested" data-stringify-type="unordered-list" data-list-tree="true" data-indent="0" data-border="1" data-border-radius-top-cap="0"&gt;
&lt;LI&gt;describe output of application pod (affected)&lt;/LI&gt;
&lt;LI data-stringify-indent="0" data-stringify-border="1"&gt;screenshot of&amp;nbsp;what limit/requests are set&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;STRONG&gt;What will change in the future&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;To enhance clarity and user control, Dynatrace is planning a comprehensive solution that will make resource requests and limits more configurable across all Operator components—through flexible configuration options at installation and within the DynaKube custom resource—while also improving defaults and documentation to reduce confusion and support diverse use cases.&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;</description>
    <pubDate>Tue, 16 Sep 2025 07:30:38 GMT</pubDate>
    <dc:creator>shahna_khalid</dc:creator>
    <dc:date>2025-09-16T07:30:38Z</dc:date>
    <item>
      <title>Navigating Resource Management for Dynatrace OneAgent on Kubernetes</title>
      <link>https://community.dynatrace.com/t5/Troubleshooting/Navigating-Resource-Management-for-Dynatrace-OneAgent-on/ta-p/283453</link>
      <description>&lt;P&gt;&lt;LI-TOC indent="15" liststyle="disc" maxheadinglevel="2"&gt;&lt;/LI-TOC&gt;&lt;/P&gt;
&lt;DIV class="lia-message-template-content-zone"&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;This article addresses a common point of confusion for Dynatrace Operator users: how resource requests and limits on application pods are handled after the OneAgent is injected. We'll clarify the behavior and provide guidance on best practices for configuring your pods to ensure predictable and stable performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Problem&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;The Misconception: Overridden Pod Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;When reviewing pod configurations with &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node &amp;lt;node-name&amp;gt;&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;, it can appear that the CPU and memory requests/limits of your application containers are being overwritten by the Dynatrace Operator's init container values. Let's look at a concrete example to understand why this is a misunderstanding.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Troubleshooting steps&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&lt;STRONG&gt;CPU Request&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Mem Request&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&lt;STRONG&gt;CPU Limit&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Mem Limit&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;&lt;STRONG&gt;Comments&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;Expected (Container - cs-cayley)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;128m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;&amp;lt;none&amp;gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&amp;lt;none&amp;gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;Originally defined in the pod/deployment spec&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;InitContainer - dynatrace-operator&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;30m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;30Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;60Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P class="p2"&gt;&lt;SPAN class="s1"&gt;Default values set by the Dynatrace Operator.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264"&gt;
&lt;P&gt;Actual (after injection)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;30Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="113"&gt;
&lt;P&gt;100m&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="123"&gt;
&lt;P&gt;60Mi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="331"&gt;
&lt;P&gt;New configuration after OneAgent injection.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;The values reported in &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;At first glance, it seems the app's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; and all &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Limit&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; values have been replaced by the init container's settings. However, this is not the case. The Dynatrace Operator only sets resources for the init container it injects; it does not alter the resource definitions of your application containers.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;kubectl describe node &amp;lt;node&amp;gt;&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;&amp;nbsp;aggregates pod-level resources, which can mislead users into thinking app container values were changed&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;The apparent override is a result of how Kubernetes calculates the "effective" resource requests and limits for a pod.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;How Kubernetes Calculates Pod Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Kubernetes uses a specific logic to determine the total resource requirements for a pod, which the scheduler then uses to place the pod on an appropriate node. The calculation takes into account all containers within the pod, including init containers.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The key formulas are:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Pod Request:&lt;/STRONG&gt; &lt;BR /&gt;max(sum(app&amp;nbsp;container&amp;nbsp;requests),max(init&amp;nbsp;container&amp;nbsp;requests))&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Pod Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(sum(app&amp;nbsp;container&amp;nbsp;limits),max(init&amp;nbsp;container&amp;nbsp;limits))&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Let's apply these formulas to our example:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective CPU Request:&lt;/STRONG&gt; &lt;BR /&gt;max(100m,30m)=100m&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Memory Request:&lt;/STRONG&gt; &lt;BR /&gt;max(128m,30Mi)=30Mi&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective CPU Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(0,100m)=100m&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li3"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Effective Memory Limit:&lt;/STRONG&gt; &lt;BR /&gt;max(0,60Mi)=60Mi&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
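&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The calculation above can be sketched in a few lines of Python (a minimal illustration, assuming values are already normalized to a common unit: millicores for CPU, bytes for memory):&lt;/SPAN&gt;&lt;/P&gt;

```python
# Effective pod resources, following the (pre-sidecar-container) Kubernetes
# scheduling rule: the pod's effective request/limit is the larger of the
# sum over app containers and the maximum over init containers.
def effective_resource(app_values, init_values):
    # Values must share one unit (e.g. millicores or bytes);
    # an unset request/limit counts as 0.
    return max(sum(app_values), max(init_values, default=0))

MI = 1024 * 1024  # bytes per mebibyte

# CPU request (millicores): max(100m, 30m) = 100m
cpu_request = effective_resource([100], [30])

# Memory request (bytes): the app's "128m" parses to 0.128 bytes,
# so the init container's 30Mi wins.
mem_request = effective_resource([0.128], [30 * MI])

# CPU/memory limits: the app set none (treated as 0), so the
# init container's 100m / 60Mi become the effective limits.
cpu_limit = effective_resource([0], [100])
mem_limit = effective_resource([0], [60 * MI])
```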
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;The values reported by &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;kubectl describe node&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; reflect this effective pod configuration, which is what the Kubernetes scheduler uses. This is why a &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;30Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; and &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Limits&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;100m/60Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; are displayed, not because the Dynatrace Operator modified your app container's spec, but because these were the highest values in the pod's total configuration.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;A Note on Memory Unit Formats&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="p3"&gt;&lt;SPAN class="s1"&gt;Another factor in the example above is the app container's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Memory Request&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;. In Kubernetes, &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; is the suffix for "millicores" when specifying CPU resources. For memory, the correct suffixes are Mi (mebibytes) or M (megabytes). While some Kubernetes versions may accept &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; as a valid value, they often interpret it as "millibytes," which is a negligible amount of memory (0.000128 bytes). This can lead to the init container's &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;30Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; request becoming the effective memory request for the pod.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;Resolution&lt;/H1&gt;
&lt;P&gt;&lt;SPAN class="s1"&gt;The Dynatrace team recognizes that this behavior can be confusing. To provide a clearer and more predictable experience, a long-term solution is in the planning and research phase. The goal is to make resource requests and limits more configurable across all components of the Dynatrace Operator, with a shift toward providing more control to the user.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Key aspects of this holistic solution include:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="ul1"&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Configurable Resources:&lt;/STRONG&gt; Allowing users to configure resource requests and limits for all components of the Operator and the components it deploys.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Flexible Configuration:&lt;/STRONG&gt; Providing options to configure these values at different levels, such as during the Operator installation (via Helm values) or within the DynaKube custom resource.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Clearer Defaults:&lt;/STRONG&gt; Setting the default resource requests and limits to be as non-intrusive as possible to avoid unexpected behavior.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="li1"&gt;&lt;SPAN class="s1"&gt;&lt;STRONG&gt;Improved Documentation:&lt;/STRONG&gt; Creating comprehensive guides and documentation to explain recommended resource values and how to configure them for different use cases.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;While this new approach is being developed, it's crucial to understand the current logic. The Dynatrace Operator is designed to respect your pod specifications and only injects resources for the OneAgent init container. By understanding the effective resource calculation logic in Kubernetes and ensuring your container specs use the correct format (e.g., &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128Mi&lt;/SPAN&gt;&lt;SPAN class="s1"&gt; instead of &lt;/SPAN&gt;&lt;SPAN class="s2"&gt;128m&lt;/SPAN&gt;&lt;SPAN class="s1"&gt;), you can maintain full control over your pod's resource allocation and ensure stable performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What's next&lt;/H1&gt;
&lt;H2&gt;&lt;STRONG&gt;Opening a support ticket&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;If this article did not help, please open a support ticket, mention that this article was used, and provide the following:&lt;/P&gt;
&lt;DIV class="p-client_container"&gt;
&lt;DIV class="p-ia4_client_container"&gt;
&lt;DIV class="p-ia4_client p-ia4_client--with-search-in-top-nav p-ia4_client--workspace-switcher-rail-visibletest p-ia4_client--sidebar-wide p-ia4_client--narrow-feature-on"&gt;
&lt;DIV class="p-client_workspace_wrapper" role="tabpanel" aria-label="Dynatrace"&gt;
&lt;DIV class="p-client_workspace" role="tabpanel" aria-label="DMs"&gt;
&lt;DIV class="p-client_workspace__layout"&gt;
&lt;DIV class="active-managed-focus-container" role="none"&gt;
&lt;DIV class="p-view_contents p-view_contents--primary" tabindex="-1" role="dialog" aria-label="Conversation with Anton Konikov"&gt;
&lt;DIV class="tabbed_channel__Abx5r"&gt;
&lt;DIV class="tabbed_channel__Abx5r"&gt;
&lt;DIV class="channel_tab_panel__zJ5Bt c-tabs__tab_panel c-tabs__tab_panel--active c-tabs__tab_panel--full_height" role="none" data-qa="tabs_content_container"&gt;
&lt;DIV class="p-file_drag_drop__container"&gt;
&lt;DIV class="p-workspace__primary_view_body"&gt;
&lt;DIV class="p-message_pane p-message_pane--classic-nav p-message_pane--scrollbar-float-adjustment p-message_pane--with-bookmarks-bar" data-qa="message_pane"&gt;
&lt;DIV role="presentation"&gt;
&lt;DIV class="c-virtual_list c-virtual_list--scrollbar c-message_list c-message_list--floating c-message_list--dark c-scrollbar c-scrollbar--fade" role="presentation"&gt;
&lt;DIV class="c-scrollbar__hider" role="presentation" data-qa="slack_kit_scrollbar"&gt;
&lt;DIV class="c-scrollbar__child" role="presentation"&gt;
&lt;DIV class="c-virtual_list__scroll_container" tabindex="-1" role="list" data-qa="slack_kit_list" aria-label="Anton Konikov (direct message, active)"&gt;
&lt;DIV id="1734101723.604509" class="c-virtual_list__item" tabindex="0" role="listitem" aria-setsize="-1" data-qa="virtual-list-item" data-item-key="1734101723.604509"&gt;
&lt;DIV class="c-message_kit__background p-message_pane_message__message c-message_kit__message p-message_pane_message__message--last" role="presentation" data-qa="message_container" data-qa-unprocessed="false" data-qa-placeholder="false"&gt;
&lt;DIV class="c-message_kit__hover" role="document" aria-roledescription="message" data-qa-hover="true"&gt;
&lt;DIV class="c-message_kit__actions c-message_kit__actions--above"&gt;
&lt;DIV class="c-message_kit__gutter"&gt;
&lt;DIV class="c-message_kit__gutter__right" role="presentation" data-qa="message_content"&gt;
&lt;DIV class="c-message_kit__blocks c-message_kit__blocks--rich_text"&gt;
&lt;DIV class="c-message__message_blocks c-message__message_blocks--rich_text" data-qa="message-text"&gt;
&lt;DIV class="p-block_kit_renderer" data-qa="block-kit-renderer"&gt;
&lt;DIV class="p-block_kit_renderer__block_wrapper p-block_kit_renderer__block_wrapper--first"&gt;
&lt;DIV class="p-rich_text_block" dir="auto"&gt;
&lt;UL class="p-rich_text_list p-rich_text_list__bullet p-rich_text_list--nested" data-stringify-type="unordered-list" data-list-tree="true" data-indent="0" data-border="1" data-border-radius-top-cap="0"&gt;
&lt;LI&gt;describe output of application pod (affected)&lt;/LI&gt;
&lt;LI data-stringify-indent="0" data-stringify-border="1"&gt;screenshot of&amp;nbsp;what limit/requests are set&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;STRONG&gt;What will change in the future&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;To enhance clarity and user control, Dynatrace is planning a comprehensive solution that will make resource requests and limits more configurable across all Operator components—through flexible configuration options at installation and within the DynaKube custom resource—while also improving defaults and documentation to reduce confusion and support diverse use cases.&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 16 Sep 2025 07:30:38 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Troubleshooting/Navigating-Resource-Management-for-Dynatrace-OneAgent-on/ta-p/283453</guid>
      <dc:creator>shahna_khalid</dc:creator>
      <dc:date>2025-09-16T07:30:38Z</dc:date>
    </item>
  </channel>
</rss>

