<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Efficient way to join several time series commands when some of them return null values - is there any other way? in DQL</title>
    <link>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292787#M3023</link>
    <description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;&lt;P&gt;We want to calculate (from logs) the percentage of errors out of the total number of requests, broken down by office name and API name. Some of the offices/APIs may not see any errors during the timeframe, in which case their timeseries command returns null. Any math operation on null returns null, and since we want the percentages over time, we need a way to "create" an array of zeros, equal in size to the number of samples (based on timeframe and interval), for those cases. As I don't know of any function that does this (coalesce returns a single zero, not an array), I've done the following:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;  timeseries {
    total = sum(apigee.apiname_requestcount, default:0)//, scalar: true )
  },
  by: { destinationoffice, apiname },
  filter: { 
    not matchesValue(destinationoffice, "*-t.*") and not matchesValue(destinationoffice, "*-d.*") and
    not matchesValue(destinationoffice, "*.dev.*") and not matchesValue(destinationoffice, "*test*")
  }
| fields  interval=record(
          timeframe.start = timeframe[start] + interval * iIndex(),
          totalrequests=coalesce(total[],0),
          destinationoffice,
          apiname
          )
| expand interval
| fieldsFlatten interval
| fieldsRemove interval
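// How the block above works (my reading of the referenced tip): total[] together with iIndex()
// is evaluated once per array element, so record() yields an array of per-sample records;
// expand then produces one row per sample, and fieldsFlatten promotes the nested fields to
// top level, where the scalar coalesce has already filled each missing datapoint with 0.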
// Add 4xx errors
| join [
  timeseries {
    errors = sum(apigee.apiname_requestcount, default:0)//, scalar: true )
  },
  by: { destinationoffice, apiname },
  filter: { 
    not matchesValue(destinationoffice, "*-t.*") and not matchesValue(destinationoffice, "*-d.*") and
    not matchesValue(destinationoffice, "*.dev.*") and not matchesValue(destinationoffice, "*test*") and
    responsecode &amp;gt;= 400 and responsecode &amp;lt;=499
  }
| fields  interval1=record(
          timeframe.start = timeframe[start] + interval * iIndex(),
          errors=coalesce(errors[],0),
          destinationoffice,
          apiname
          )
| expand interval1
| fieldsFlatten interval1
| fieldsRemove interval1
], on:{
  left[interval.timeframe.start]==right[interval1.timeframe.start], 
  left[interval.destinationoffice]==right[interval1.destinationoffice], 
  left[interval.apiname]==right[interval1.apiname] 
}, prefix:"error4xx.", kind:leftOuter
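// kind:leftOuter keeps every row from the left side even when no matching 4xx row exists;
// those rows carry null in error4xx.interval1.errors, which the coalesce below turns into 0.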
| fieldsAdd errors4xx = coalesce(error4xx.interval1.errors,0)
//add errors 5xx
| join [
  timeseries {
    errors = sum(apigee.apiname_requestcount, default:0)//, scalar: true )
  },
  by: { destinationoffice, apiname },
  filter: { 
    not matchesValue(destinationoffice, "*-t.*") and not matchesValue(destinationoffice, "*-d.*") and
    not matchesValue(destinationoffice, "*.dev.*") and not matchesValue(destinationoffice, "*test*") and
    responsecode &amp;gt;= 500 and responsecode &amp;lt;=599
  }
| fields  interval1=record(
          timeframe.start = timeframe[start] + interval * iIndex(),
          errors=coalesce(errors[],0),
          destinationoffice,
          apiname
          )
| expand interval1
| fieldsFlatten interval1
| fieldsRemove interval1
], on:{
  left[interval.timeframe.start]==right[interval1.timeframe.start], 
  left[interval.destinationoffice]==right[interval1.destinationoffice], 
  left[interval.apiname]==right[interval1.apiname] 
}, prefix:"error5xx.", kind:leftOuter
| fieldsAdd errors5xx = coalesce(error5xx.interval1.errors,0)
| fieldsAdd errorPct = coalesce((errors5xx+errors4xx)*100 / interval.totalrequests,0)
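// Assumption: if interval.totalrequests is 0, the division yields null, so the outer
// coalesce also maps that case to 0 rather than leaving a gap in the series.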
// | summarize {errors=collectArray(errorPct), start=min(interval.timeframe.start), end=max(interval.timeframe.start)  },by: {destinationoffice = interval.destinationoffice, apiname = interval.apiname}
| makeTimeseries errorPct=avg(errorPct), by: {destinationoffice = interval.destinationoffice, apiname = interval.apiname}, time: interval.timeframe.start, interval:1m&lt;/LI-CODE&gt;&lt;P&gt;This is actually based on the great tip from&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/19769"&gt;@marco_irmer&lt;/a&gt; found at&amp;nbsp;&lt;A title="DQL Query snippets query collections pro tips" href="https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-DQL-Query-Snippets-Collection/m-p/255109" target="_blank" rel="noopener"&gt;https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-DQL-Query-Snippets-Collection/m-p/255109&lt;/A&gt;. What I've done is separate the arrays into single-value rows, use coalesce to make the join work, and then recreate the timeseries at the end.&lt;/P&gt;&lt;P&gt;To all of you DQL experts: is there any better way to do this?&lt;/P&gt;</description>
    <pubDate>Sun, 11 Jan 2026 08:29:33 GMT</pubDate>
    <dc:creator>gilgi</dc:creator>
    <dc:date>2026-01-11T08:29:33Z</dc:date>
    <item>
      <title>Efficient way to join several time series commands when some of them return null values - is there any other way?</title>
      <link>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292787#M3023</link>
      <pubDate>Sun, 11 Jan 2026 08:29:33 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292787#M3023</guid>
      <dc:creator>gilgi</dc:creator>
      <dc:date>2026-01-11T08:29:33Z</dc:date>
    </item>
    <item>
      <title>Re: Efficient way to join several time series commands when some of them return null values - is there any other way?</title>
      <link>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292791#M3024</link>
      <description>&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/12008"&gt;@gilgi&lt;/a&gt;,&amp;nbsp;your query looks quite overengineered&amp;nbsp;&lt;img class="lia-deferred-image lia-image-emoji" src="https://community.dynatrace.com/html/@6EDF483EF947B43E16DF999BED8ABCC0/images/emoticons/dynaspin.gif" alt=":dynaspin:" title=":dynaspin:" /&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;From what I see and deduce (maybe I missed something), it's doable in a single timeseries command, something like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries { 
  total=sum(apigee.apiname_requestcount, default: 0)
  , errors_4xx=sum(apigee.apiname_requestcount, default:0, filter:{ responsecode &amp;gt;= 400 and responsecode &amp;lt;500 })
  , errors_5xx=sum(apigee.apiname_requestcount, default:0, filter:{ responsecode &amp;gt;= 500 and responsecode &amp;lt;600 })
}
, union:{ true }
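// union true keeps series that lack datapoints for one of the metrics, filling them with
// that metric's default, instead of returning only series present for all three metrics
// (per the docs linked below).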
, by: { destinationoffice, apiname }
, filter: { 
    not matchesValue(destinationoffice, "*-t.*") and not matchesValue(destinationoffice, "*-d.*") and
    not matchesValue(destinationoffice, "*.dev.*") and not matchesValue(destinationoffice, "*test*")
  }&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;The key is:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;filter&lt;/STRONG&gt; can be applied to individual metric keys independently&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;union&lt;/STRONG&gt;: true is required if some of the datapoints can be absent; see the &lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/dynatrace-query-language/commands/metric-commands#join-behavior-union-parameter" target="_blank" rel="noopener"&gt;docs&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Hope this helps. (Here is a &lt;A href="https://wkf10640.apps.dynatrace.com/ui/intent/dynatrace.notebooks/view-query#%7B%22visualization%22%3A%22lineChart%22%2C%22visualizationSettings%22%3A%7B%22autoSelectVisualization%22%3Atrue%7D%2C%22dt.timeframe%22%3A%7B%22from%22%3A%22now()-24h%22%2C%22to%22%3A%22now()%22%7D%2C%22dt.query%22%3A%22timeseries%20%7B%20%5Cn%20%20total%3Dsum(dt.service.request.count%2C%20default%3A%200)%5Cn%20%20%2C%20errors_4xx%3Dsum(dt.service.request.count%2C%20default%3A0%2C%20filter%3A%7B%20http.response.status_code%3E%3D400%20and%20http.response.status_code%3C500%7D)%5Cn%20%20%2C%20errors_5xx%3Dsum(dt.service.request.count%2C%20default%3A0%2C%20filter%3A%7B%20http.response.status_code%3E%3D500%20and%20http.response.status_code%3C600%7D)%5Cn%7D%2C%20by%3A%7Bdt.entity.service%7D%2C%20filter%3A%7B%20dt.entity.service%20%3D%3D%20%5C%22SERVICE-EB9383023479296B%5C%22%20%7D%2C%20union%3A%7B%20true%20%7D%5Cn%7C%20fieldsAdd%20errorRate%3Derrors_4xx%5B%5D%2Berrors_5xx%5B%5D%2Ftotal%5B%5D%22%2C%22hideInput%22%3Afalse%2C%22sourceApplication%22%3A%22dynatrace.notebooks%22%2C%22querySettings%22%3A%7B%22maxResultRecords%22%3A1000%2C%22defaultScanLimitGbytes%22%3A500%2C%22maxResultMegaBytes%22%3A1%2C%22defaultSamplingRatio%22%3A10%2C%22enableSampling%22%3Afalse%7D%7D" target="_self"&gt;sample&lt;/A&gt;&amp;nbsp;using the built-in service metric in the playground.)&lt;/P&gt;</description>
      <pubDate>Mon, 12 Jan 2026 07:18:02 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292791#M3024</guid>
      <dc:creator>Julius_Loman</dc:creator>
      <dc:date>2026-01-12T07:18:02Z</dc:date>
    </item>
    <item>
      <title>Re: Efficient way to join several time series commands when some of them return null values - is there any other way?</title>
      <link>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292793#M3025</link>
      <description>&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/3364"&gt;@Julius_Loman&lt;/a&gt;&amp;nbsp;and this is why they say you're a legend! Thanks.&lt;/P&gt;</description>
      <pubDate>Mon, 12 Jan 2026 06:57:43 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Efficient-way-to-join-several-time-series-commands-when-some-of/m-p/292793#M3025</guid>
      <dc:creator>gilgi</dc:creator>
      <dc:date>2026-01-12T06:57:43Z</dc:date>
    </item>
  </channel>
</rss>

