on 26 Mar 2025 06:52 AM
There are differences to consider when querying metrics in Dynatrace classic, i.e. using the Metrics API or the Data Explorer, versus querying Grail with DQL in the Dynatrace platform. This article goes through the main differences and things to keep in mind when querying on both sides.
Not all classic metrics are available on the Dynatrace platform. For an overview of the exported metrics, see the documentation page Built-in Metrics on Grail.
Depending on the metric type, the metric key in the platform might differ from that of its corresponding classic metric. All mapping rules, including examples, can be found in the top section of Built-in Metrics on Grail. In a nutshell, the main points are:
Built-in metrics
In the platform, built-in metric keys were renamed to start with dt. instead of builtin:. Moreover, keys in the platform use snake case. Built-in Metrics on Grail also contains one-to-one mappings for built-in metrics, which can simply be replaced by the new name, as well as migration guides for those that don't have a one-to-one replacement.
Extension metrics
Classic extension metrics with a builtin:tech prefix appear in the platform with a legacy. prefix instead. Classic extension metrics with an ext: prefix get the prefix removed in the platform.
Custom metrics
The keys of metrics ingested via the Metrics v2 API, the OTLP endpoint, and the EEC aren't changed. However, if a metric key is automatically suffixed with .count or .gauge in 2nd gen (see here for details), this suffix isn't added in the platform.
Calculated service metrics
Now that classic calculated service metrics are available in the platform (as of version 1.310), metrics with a calc:service prefix appear in the platform with a service. prefix instead.
Tip:
We strongly recommend using the automatic converter to find the corresponding platform metric key for a classic metric.
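As an illustration of the renaming, a query on both sides might look like the sketch below. The host CPU usage metric is used here only as an example of a one-to-one mapping; verify the exact platform key with the automatic converter or Built-in Metrics on Grail.

```
// Classic metric selector (Metrics API / Data Explorer)
builtin:host.cpu.usage:splitBy("dt.entity.host"):avg

// Equivalent DQL on the platform: dt. prefix instead of builtin:
timeseries cpu = avg(dt.host.cpu.usage), by: { dt.entity.host }
```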
There might be differences between the values returned by metric queries in Dynatrace classic and the results of equivalent queries in the platform. Possible reasons include:
For recent time frames (approximately the last 3 minutes), the metric data might be available in classic, but not in the platform, and vice versa.
If the classic metrics ingestion limit is reached, data points are no longer forwarded to Grail unless the tenant has the Metrics powered by Grail DPS capability. In case of rejections, more data points might be missing on the platform than in Dynatrace classic due to internal constraints like caching. Migrating to the Metrics powered by Grail DPS capability resolves this issue. Once the Metrics powered by Grail capability is used, rejections only affect Dynatrace classic.
In Dynatrace classic, you might see partial or no results if you hit a metric query limit. In the example below, no data is shown because the metric query limit of 20 million data points is reached. On the platform, this limit is increased to 500 million data points.
There isn't always a one-to-one corresponding platform metric containing exactly the same timeseries data for each classic metric. Metric families like service metrics and runtime metrics contain metrics that are mapped to a Grail metric with slightly different semantics.
Tip
The Service metrics migration guide, the Runtime metrics migration guide, and the Calculated service metrics guide provide detailed information about such cases.
For some metric families, the classic metrics and the platform metrics are collected and reported by different mechanisms on the OneAgent side. This can lead to slightly different values for corresponding metrics in classic and the platform. Such metric families are:
Process metrics (aka “Generic technology metrics” in classic), starting with the builtin:tech.generic prefix
Java metrics, starting with the builtin:tech.jvm prefix
Infrastructure metrics, starting with the builtin:host prefix
Webserver metrics, starting with the builtin:tech.webserver prefix
Not all metric selector operators available in Dynatrace classic have a corresponding function in DQL. The Metric selector conversion guide documents what is available. As a best practice, use the automatic converter to find the most appropriate replacement function(s) in DQL for a classic metric selector.
It’s worth mentioning that some operations behave slightly differently in the platform. Here are some differences to consider.
The classic :splitBy operator is covered by the by: parameter in DQL. One basic difference is that in classic, splitBy also acts as a filter: it filters out dimension tuples where the split dimension is null. That is not the case for by: in DQL.
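A sketch of how the classic null filtering could be reproduced in DQL is shown below. The service response time mapping used here is an assumption for illustration; verify the exact keys with the converter.

```
// Classic: tuples with a null dt.entity.service dimension are dropped implicitly
builtin:service.response.time:splitBy("dt.entity.service"):avg

// DQL: by: keeps null dimension tuples; add an explicit filter to match classic behavior
timeseries rt = avg(dt.service.request.response_time), by: { dt.entity.service }
| filter isNotNull(dt.entity.service)
```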
As documented in the conversion troubleshooting section, the classic :percentile operator returns different values than Grail's percentile aggregation function, because DQL uses a more efficient algorithm.
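For reference, a percentile query on both sides might look like the sketch below (the metric mapping is an assumption; expect slightly different values due to the different algorithms):

```
// Classic selector
builtin:service.response.time:percentile(90)

// DQL equivalent (approximate values, different algorithm)
timeseries p90 = percentile(dt.service.request.response_time, 90)
```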
The classic :rollup operator is covered by array functions in DQL. However, these array functions have different semantics and will return different results.
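As a sketch, a classic :rollup could be approximated with a DQL array function such as arrayAvg. The metric and the exact rollup arguments are illustrative, and since the semantics differ, the results will not match exactly.

```
// Classic: average rollup over a 1h window
builtin:host.cpu.usage:rollup(avg, 1h)

// DQL sketch: aggregate the resulting timeseries array per series
timeseries cpu = avg(dt.host.cpu.usage), by: { dt.entity.host }
| fieldsAdd windowAvg = arrayAvg(cpu)
```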
As documented in the count aggregation section, the classic :count operator is not always simply converted to the count() DQL aggregation function. Depending on the metric metadata, :count could return the number of observations reported for a metric in Dynatrace classic. In this case, the equivalent DQL function is sum(..., rollup: total).
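A minimal sketch of the shape of that conversion (the metric key my.custom.metric is hypothetical, chosen only for illustration):

```
// Classic: returns the number of observations reported to the metric
my.custom.metric:splitBy():count

// Platform: when :count means observation count, use sum with rollup: total
timeseries observations = sum(my.custom.metric, rollup: total)
```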
The classic :avg operator is not always simply converted to a plain avg() DQL aggregation function. The same applies to :min and :max. Depending on their metadata, some metrics only have the value aggregation available. For such metrics, in Dynatrace classic, the :avg operator even requires a preceding splitBy(): metric_name:splitBy():avg. The equivalent DQL for such a classic metric selector requires a rollup parameter added to the avg function: avg(..., rollup: sum).
Example:
The classic metric builtin:billing.foundation_and_discovery.usage has only the value aggregation available. The equivalent DQL for the classic metric selector builtin:billing.foundation_and_discovery.usage:splitBy():avg would be:
timeseries usage = avg(dt.billing.foundation_and_discovery.usage, rollup: sum)