<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Metric dimension limits in Log Analytics</title>
    <link>https://community.dynatrace.com/t5/Log-Analytics/Metric-dimension-limits/m-p/295717#M1536</link>
    <description>&lt;P&gt;I ended up opening a support case for this. Here's what we found:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Metrics must be extracted in OpenPipeline to benefit from the unlimited dimensions on Grail.&lt;/LI&gt;&lt;LI&gt;My query above was returning a count higher than 1 million because the "by:" statement included system dt.* dimensions, which do not count toward the classic 1 million tuple limit. Unfortunately, because metric queries stop scanning after a certain number of datapoints, I wasn't able to use that method to verify the tuple count capping at 1 million.&lt;/LI&gt;&lt;LI&gt;Use the dsfm:server.metrics.metric_dimensions_usage metric as a reliable way to detect metrics reaching the dimension limits.&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Thu, 05 Mar 2026 19:37:22 GMT</pubDate>
    <dc:creator>ZackE</dc:creator>
    <dc:date>2026-03-05T19:37:22Z</dc:date>
    <item>
      <title>Metric dimension limits</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Metric-dimension-limits/m-p/294885#M1524</link>
      <description>&lt;P&gt;We have a log metric configured in classic settings with a very high number of dimensions (tuples). The "dt.sfm.server.metrics.rejections" metric gives the following error for the log metric we configured:&lt;/P&gt;&lt;P&gt;"Couldn't save ingested data. This metric key has reached the maximum number of tuples for a single metric for the last 30 days."&lt;/P&gt;&lt;P&gt;Do I understand correctly that this means we are only ingesting data for tuples (a tuple being a unique set of dimensions for a metric, correct?) that were already ingested before we hit the limit?&lt;BR /&gt;According to the &lt;A href="https://docs.dynatrace.com/docs/shortlink/metrics-limits#metrics-powered-by-grail-compared-to-metrics-classic" target="_blank"&gt;documentation&lt;/A&gt;, the limit for dimension tuples is 1 million for classic metrics and unlimited (excluding highly volatile dimensions) on Grail. I didn't think the classic metric limit applied since we do use Grail, but is that incorrect?&lt;BR /&gt;Does the log metric have to be configured in OpenPipeline in order to benefit from the unlimited dimensions on Grail? And while there are a large number of tuples for our log metric, I don't believe this to be highly volatile as they describe.&lt;/P&gt;&lt;P&gt;I've also created a DQL query to attempt to count the number of unique tuples within the timeframe of the query, and it's showing a cumulative count of around 1.7 million by the end of the last 30 days, which is higher than the 1 million classic limit. So why are we getting that error message? Is data really getting dropped?&lt;/P&gt;&lt;P&gt;Here is the DQL query I mentioned for reference:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries {count = sum(&amp;lt;log metric key&amp;gt;)}, by:{&amp;lt;split by all available dimensions&amp;gt;}, interval: 6h

// get the index of the first occurrence of each tuple
| fieldsAdd first_index = arrayIndexOf(count,arrayFirst(count))

// only count the first occurrence (count[]/count[] evaluates to 1)
| fields timeframe, count = if(iIndex() == first_index, count[]/count[]), interval

// count of all new tuples over time
| summarize {timeframe = takeFirst(timeframe), dimension_count = sum(count[]), interval = takeFirst(interval)}

// add cumulative count over time and total count
| fieldsAdd cumulative_dimension_count = arrayCumulativeSum(dimension_count), total = arraySum(dimension_count)&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 17 Feb 2026 22:41:42 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Metric-dimension-limits/m-p/294885#M1524</guid>
      <dc:creator>ZackE</dc:creator>
      <dc:date>2026-02-17T22:41:42Z</dc:date>
    </item>
    <item>
      <title>Re: Metric dimension limits</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Metric-dimension-limits/m-p/295717#M1536</link>
      <description>&lt;P&gt;I ended up opening a support case for this. Here's what we found:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Metrics must be extracted in OpenPipeline to benefit from the unlimited dimensions on Grail.&lt;/LI&gt;&lt;LI&gt;My query above was returning a count higher than 1 million because the "by:" statement included system dt.* dimensions, which do not count toward the classic 1 million tuple limit. Unfortunately, because metric queries stop scanning after a certain number of datapoints, I wasn't able to use that method to verify the tuple count capping at 1 million.&lt;/LI&gt;&lt;LI&gt;Use the dsfm:server.metrics.metric_dimensions_usage metric as a reliable way to detect metrics reaching the dimension limits.&lt;/LI&gt;&lt;/UL&gt;</description>
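      The detection approach in the last bullet could be sketched as a DQL query. This is a hypothetical sketch, not from the thread: it assumes the dsfm:server.metrics.metric_dimensions_usage self-monitoring metric is split by a metric-key dimension (the exact dimension name may differ in your environment) and uses an arbitrary 800k warning threshold.

```
// Hypothetical sketch: surface metrics approaching the classic 1 million tuple limit.
// Assumes the usage metric is split by a metric-key dimension; adjust the
// dimension name and threshold to match your environment.
timeseries usage = max(dsfm:server.metrics.metric_dimensions_usage), by:{metric_key}
| fieldsAdd peak = arrayMax(usage)
| filter peak &gt; 800000
| sort peak desc
```

      Alerting on this metric rather than re-counting tuples with a timeseries query avoids the datapoint-scan limit mentioned above.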
      <pubDate>Thu, 05 Mar 2026 19:37:22 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Metric-dimension-limits/m-p/295717#M1536</guid>
      <dc:creator>ZackE</dc:creator>
      <dc:date>2026-03-05T19:37:22Z</dc:date>
    </item>
  </channel>
</rss>

