DQL - column with the number of request count that are greater than value of other column

sa_gopalm
Visitor

timeseries { value.A = avg(dt.service.request.response_time, scalar: true),
    value.B = min(dt.service.request.response_time, scalar: true),
    value.C = max(dt.service.request.response_time, scalar: true),
    value.D = percentile(dt.service.request.response_time, 90, scalar: true),
    value.E = percentile(dt.service.request.response_time, 99, scalar: true),
    value.F = sum(dt.service.request.count, scalar: true) },
  union: true,
  by: { endpoint.name },
  filter: { matchesValue(entityAttr(dt.entity.kubernetes_cluster, "entity.name"), "xxx") }

 

(screenshot of the query results attached)


Can anyone help me with this query? I need to add another column showing the number of requests whose response time is greater than the value in column D.

For example, the first value in column D is 62.82 ms, so I need the count of requests that took longer than 62.82 ms, and similarly for the other values in column D.

 



3 REPLIES

marco_irmer
Champion

The definition of the xx percentile aggregation, as it relates to the dt.service.request.response_time metric, can be thought of as the time within which xx% of requests completed. For example, if the 90th percentile value for response time is 62ms, this means that 90% of requests completed in less than 62ms. Therefore, 10% of requests took longer than 62ms. We can use this knowledge to estimate the number of requests taking longer than 62ms as (Total Requests * 10%).

In DQL, you would add the following line to accomplish this (where 'value.F' represents the total number of requests):

| fieldsAdd requestsGreaterThanValueD = value.F * 0.10

You can repeat the same logic for the other percentile column (value.E) as well.
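Put together, the two derived columns would look something like this (a sketch; the column names are illustrative, and the 0.01 factor for value.E follows from the 99th-percentile definition in the same way the 0.10 factor follows from the 90th):

```
| fieldsAdd requestsGreaterThanValueD = value.F * 0.10,
            requestsGreaterThanValueE = value.F * 0.01
```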

sa_gopalm
Visitor

Hi @marco_irmer,

Thanks for the response, much appreciated. I tried this too, but there is a huge difference from what I can see in the multidimensional analysis (MDA).

Is there any way we can filter using a response time greater than the value in column D?

marco_irmer
Champion
The MDA will give different results because it is powered by trace data in the background. This means that every time you view the data, Dynatrace analyzes all the raw traces to count how many took more than a certain amount of time.

The metrics for service response time are aggregated as timeseries data, which means that some of the detail available in the raw traces is lost. This is why percentile aggregations on metrics are technically always 'estimates' with a certain margin of error. 

Depending on your needs, there are a few different options:

  1. Accept the estimated figures provided by the dt.service.request.response_time metric as they are.
  2. Change your DQL query to use span data instead, to obtain the most accurate results. Some DQL 'snippets' are available on the 'New Section' dialog that should help you get started with this. Note that trace data has a different retention period than metrics.
  3. Consider making use of SLOs to measure what portion of requests take longer than xx ms. The main drawback with SLOs here will be that you have to choose a threshold up-front.
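A minimal sketch of option 2, assuming your environment exposes span data via `fetch spans` with a `duration` field and an `endpoint.name` attribute (field names and the duration-literal syntax may differ in your tenant; the 62.82 ms threshold is just the example value from above, and note you'd need one such threshold per endpoint):

```
fetch spans
| filter matchesValue(entityAttr(dt.entity.kubernetes_cluster, "entity.name"), "xxx")
| summarize {
    totalRequests = count(),
    // count of spans slower than the example 90th-percentile value
    slowRequests = countIf(duration > 62820000ns)
  }, by: { endpoint.name }
```

Because this counts raw spans rather than estimating from aggregated metrics, its results should line up much more closely with what you see in the MDA.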
