<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Troubleshoot “Elasticsearch log queue is full” and “Elasticsearch log storing failed” in Managed Cluster (Troubleshooting)</title>
    <link>https://community.dynatrace.com/t5/Troubleshooting/Troubleshoot-Elasticsearch-log-queue-is-full-and-Elasticsearch/ta-p/240404</link>
    <description>&lt;DIV class="lia-message-template-content-zone"&gt;&lt;P&gt;The warning messages “Elasticsearch log queue is full” and “Elasticsearch log storing failed” occur when attempted log ingestion exceeds the limits of the Managed cluster, or when the Elasticsearch mount on a given node runs out of disk space. This limit is not static; it is influenced by the number of nodes and by the storage and CPU cores provided to each node.&lt;/P&gt;&lt;P&gt;In the Cluster Management Console (CMC) events, you can observe messages such as:&lt;/P&gt;&lt;/DIV&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Elasticsearch log queue is full
Your Elasticsearch deployment requires scaling&lt;/LI-CODE&gt;&lt;LI-CODE lang="markup"&gt;Elasticsearch log storing failed
Check your Elasticsearch deployment state&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_0-1710848388431.png" style="width: 979px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18376i09342432A9B54811/image-dimensions/979x95?v=v2" width="979" height="95" role="button" title="mutaz_0-1710848388431.png" alt="mutaz_0-1710848388431.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;These events occur when the queue for event ingestion is full; at that point, Elasticsearch can no longer accept incoming log events. Unless lowering the ingest limit is an option, we generally recommend scaling the nodes by either increasing the CPU cores available to each node or adding extra nodes to the cluster. This gives the cluster greater capacity for processing incoming log events:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/managed-cluster/installation/add-a-new-cluster-node" target="_blank" rel="noopener"&gt;Adding extra nodes to the cluster&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;Increasing the number of CPU cores in each node.&lt;/LI&gt;&lt;LI&gt;Optional: Lower the maximum ingest of log events per minute from the CMC.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Alternatively, you can reduce the volume of incoming log events, which lessens the saturation of the Elasticsearch log queue during ingestion. You can reconfigure the base log filter in the log ingest rules to ingest only relevant logs instead of all logs.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Go to Settings --&amp;gt; Log Monitoring --&amp;gt; Log ingest rules&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_1-1710848553835.png" style="width: 798px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18378i24267DF449B6FB66/image-dimensions/798x306?v=v2" width="798" height="306" role="button" title="mutaz_1-1710848553835.png" alt="mutaz_1-1710848553835.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: You can check the Davis data units (DDU) consumption of Log Monitoring in your environment, which gives an approximate measure of which log sources send the most logs, to help you tune the log ingest rules and reduce ingestion where possible.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Consumption --&amp;gt; Davis data units --&amp;gt; Log&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_2-1710848781915.png" style="width: 863px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18379i7AF3F40356711B34/image-dimensions/863x208?v=v2" width="863" height="208" role="button" title="mutaz_2-1710848781915.png" alt="mutaz_2-1710848781915.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 19 Mar 2024 11:55:09 GMT</pubDate>
    <dc:creator>mutaz</dc:creator>
    <dc:date>2024-03-19T11:55:09Z</dc:date>
    <item>
      <title>Troubleshoot “Elasticsearch log queue is full” and “Elasticsearch log storing failed” in Managed Cluster</title>
      <link>https://community.dynatrace.com/t5/Troubleshooting/Troubleshoot-Elasticsearch-log-queue-is-full-and-Elasticsearch/ta-p/240404</link>
      <description>&lt;DIV class="lia-message-template-content-zone"&gt;&lt;P&gt;The warning messages “Elasticsearch log queue is full” and “Elasticsearch log storing failed” occur when attempted log ingestion exceeds the limits of the Managed cluster, or when the Elasticsearch mount on a given node runs out of disk space. This limit is not static; it is influenced by the number of nodes and by the storage and CPU cores provided to each node.&lt;/P&gt;&lt;P&gt;In the Cluster Management Console (CMC) events, you can observe messages such as:&lt;/P&gt;&lt;/DIV&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Elasticsearch log queue is full
Your Elasticsearch deployment requires scaling&lt;/LI-CODE&gt;&lt;LI-CODE lang="markup"&gt;Elasticsearch log storing failed
Check your Elasticsearch deployment state&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_0-1710848388431.png" style="width: 979px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18376i09342432A9B54811/image-dimensions/979x95?v=v2" width="979" height="95" role="button" title="mutaz_0-1710848388431.png" alt="mutaz_0-1710848388431.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;These events occur when the queue for event ingestion is full; at that point, Elasticsearch can no longer accept incoming log events. Unless lowering the ingest limit is an option, we generally recommend scaling the nodes by either increasing the CPU cores available to each node or adding extra nodes to the cluster. This gives the cluster greater capacity for processing incoming log events:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/managed-cluster/installation/add-a-new-cluster-node" target="_blank" rel="noopener"&gt;Adding extra nodes to the cluster&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;Increasing the number of CPU cores in each node.&lt;/LI&gt;&lt;LI&gt;Optional: Lower the maximum ingest of log events per minute from the CMC.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Alternatively, you can reduce the volume of incoming log events, which lessens the saturation of the Elasticsearch log queue during ingestion.
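In either case, it can help to first confirm the state of the embedded Elasticsearch directly on a cluster node. The commands below are a minimal sketch: they assume shell access to the node, and the port and storage path shown are installation defaults that may differ in your environment.&amp;nbsp;&lt;LI-CODE lang="markup"&gt;# Cluster health: status, node count, and unassigned shards
curl -s "http://localhost:9200/_cluster/health?pretty"

# Disk usage per node as seen by Elasticsearch
curl -s "http://localhost:9200/_cat/allocation?v"

# Free space on the Elasticsearch mount (default path; adjust if customized)
df -h /var/opt/dynatrace-managed/elasticsearch&lt;/LI-CODE&gt;If the health status is red or a node's disk is nearly full, scaling or freeing disk space takes priority over filter tuning.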
You can reconfigure the base log filter in the log ingest rules to ingest only relevant logs instead of all logs.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Go to Settings --&amp;gt; Log Monitoring --&amp;gt; Log ingest rules&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_1-1710848553835.png" style="width: 798px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18378i24267DF449B6FB66/image-dimensions/798x306?v=v2" width="798" height="306" role="button" title="mutaz_1-1710848553835.png" alt="mutaz_1-1710848553835.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: You can check the Davis data units (DDU) consumption of Log Monitoring in your environment, which gives an approximate measure of which log sources send the most logs, to help you tune the log ingest rules and reduce ingestion where possible.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Consumption --&amp;gt; Davis data units --&amp;gt; Log&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mutaz_2-1710848781915.png" style="width: 863px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/18379i7AF3F40356711B34/image-dimensions/863x208?v=v2" width="863" height="208" role="button" title="mutaz_2-1710848781915.png" alt="mutaz_2-1710848781915.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2024 11:55:09 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Troubleshooting/Troubleshoot-Elasticsearch-log-queue-is-full-and-Elasticsearch/ta-p/240404</guid>
      <dc:creator>mutaz</dc:creator>
      <dc:date>2024-03-19T11:55:09Z</dc:date>
    </item>
  </channel>
</rss>

