Log Analytics

Metric dimension limits

ZackE
Participant

We have a log metric configured in classic settings with a very high number of dimensions (tuples). The self-monitoring metric "dt.sfm.server.metrics.rejections" reports the following error for that log metric:

"Couldn't save ingested data. This metric key has reached the maximum number of tuples for a single metric for the last 30 days."

Do I understand correctly that this means we are only ingesting data for tuples (a tuple being a unique combination of dimension values for a metric, correct?) that were already seen before we hit the limit?
According to the documentation, the limit for dimension tuples is 1 million for classic metrics and unlimited (excluding highly volatile dimensions) on Grail. I didn't think the classic metric limit applied since we do use Grail, but is that incorrect?
Does the log metric have to be configured in OpenPipeline in order to benefit from the unlimited dimensions on Grail? And while our log metric does have a large number of tuples, I don't believe they are highly volatile in the sense the documentation describes.

I've also created a DQL query to attempt to count the number of unique tuples within the timeframe of the query and it's showing a cumulative count of around 1.7 million by the end of the last 30 days, which is higher than the 1 million classic limit. So why are we getting that error message? Is data really getting dropped?

Here is the DQL query I mentioned for reference:

timeseries {count = sum(<log metric key>)}, by: {<split by all available dimensions>}, interval: 6h

// get the index of the first occurrence of each tuple
| fieldsAdd first_index = arrayIndexOf(count, arrayFirst(count))

// count only the first occurrence (count[]/count[] normalizes each match to 1)
| fields timeframe, count = if(iIndex() == first_index, count[]/count[]), interval

// count of all new tuples over time
| summarize {timeframe = takeFirst(timeframe), dimension_count = sum(count[]), interval = takeFirst(interval)}

// add cumulative count over time and total count
| fieldsAdd cumulative_dimension_count = arrayCumulativeSum(dimension_count), total = arraySum(dimension_count)
1 REPLY

ZackE
Participant

I ended up opening a support case for this. Here's what we found:

  • Metrics must be extracted in OpenPipeline to benefit from unlimited dimensions on Grail
  • My query above was returning a count higher than 1 million because I was including system dt.*  dimensions in the "by:" statement, which do not count toward the classic 1 million tuple limit. Unfortunately, because metric queries stop scanning after a certain number of datapoints, I wasn't able to verify the tuple count capping at 1 million using that method.
  • Use the dsfm:server.metrics.metric_dimensions_usage metric as a reliable way to detect metrics approaching the dimension limit.
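
For reference, a minimal DQL sketch of how that self-monitoring metric could be queried. Note this is an assumption-laden example, not a verified recipe: the metric_key split dimension and the 900,000 threshold are guesses to adapt to your environment.

```dql
// sketch only: the metric_key dimension name and the threshold
// below are assumptions - verify both in your environment
timeseries usage = max(dsfm:server.metrics.metric_dimensions_usage), by: { metric_key }
| fieldsAdd peak = arrayMax(usage)
// flag metrics nearing the classic 1 million tuple limit
| filter peak > 900000
```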
