<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: More info on built-in metric aggregations and filters in Automations</title>
    <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/200258#M1146</link>
    <description>&lt;P&gt;Hi, sorry for the late reply; I lost track of your message.&lt;/P&gt;&lt;P&gt;An SLO works this way. Suppose you want to check whether the login time on your website stays under 10 seconds for at least 80% of your users.&lt;/P&gt;&lt;P&gt;The SLO definition will be:&lt;BR /&gt;(metric_that_counts_number_of_fast_logins)/(total_number_of_logins)*100&lt;/P&gt;&lt;P&gt;The time window you define (-1h, -1d...) determines the values of the operands in the calculation above, and therefore the final value of the SLO for that timeframe.&lt;/P&gt;&lt;P&gt;I hope this explains how it basically works.&lt;/P&gt;&lt;P&gt;In your case you'd like to count how many individual datapoints are above/below a threshold. You can't do that with out-of-the-box metrics, because they are already aggregated values, i.e. the original datapoints are no longer available for this purpose.&lt;/P&gt;&lt;P&gt;Bye&lt;BR /&gt;Paolo&lt;/P&gt;</description>
    <pubDate>Mon, 12 Dec 2022 15:37:52 GMT</pubDate>
    <dc:creator>paolo_fumanelli</dc:creator>
    <dc:date>2022-12-12T15:37:52Z</dc:date>
    <item>
      <title>More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/197911#M1141</link>
      <description>&lt;P&gt;I'm having a hard time finding documentation that describes how the aggregations and/or filters work on SLOs.&lt;BR /&gt;Here is my use case: get the ratio of calls under 5 seconds for a certain keyRequest.&lt;BR /&gt;&lt;BR /&gt;((builtin:service.keyRequest.response.time:avg:partition("latency",value("good",lt(5000000))):splitBy():count:fold(sum))/(builtin:service.keyRequest.response.time:avg:splitBy():count:fold(sum))*100)&lt;BR /&gt;&lt;BR /&gt;with a filter:&amp;nbsp;type(service_method),entityName("/apiY"),fromRelationships.isServiceMethodOf(type(service_method_group),fromRelationships.isGroupOf(type(service),entityId("SERVICE-Z")))&lt;BR /&gt;&lt;BR /&gt;This setup produces some numbers, but they are wrong. What am I missing?&lt;BR /&gt;Also, where can I find more info describing how the aggregations are supposed to work? Thank you&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2022 14:17:26 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/197911#M1141</guid>
      <dc:creator>adtuser</dc:creator>
      <dc:date>2022-11-15T14:17:26Z</dc:date>
    </item>
    <item>
      <title>Re: More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198205#M1142</link>
      <description>&lt;P&gt;Hi, that's very tricky, and I'm not sure it can be achieved without implementing a calculated metric; see the example below.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="calc_metric_2.jpg" style="width: 999px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/8455iD5F3617532918810/image-size/large?v=v2&amp;amp;px=999" role="button" title="calc_metric_2.jpg" alt="calc_metric_2.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;The calculated metric will become your numerator; the denominator will be the total request count for the mentioned service method.&lt;/P&gt;&lt;P&gt;More info about aggregations can be found here:&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/metric-selector#aggregation" target="_blank"&gt;Metrics API - Metric selector | Dynatrace Docs&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Bye&lt;/P&gt;</description>
      <pubDate>Fri, 11 Nov 2022 11:06:13 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198205#M1142</guid>
      <dc:creator>paolo_fumanelli</dc:creator>
      <dc:date>2022-11-11T11:06:13Z</dc:date>
    </item>
    <item>
      <title>Re: More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198338#M1143</link>
      <description>&lt;P&gt;Thank you for your answer. The reference link helped.&lt;/P&gt;&lt;P&gt;I understand a calculated metric (using DDUs) would provide the exact metric, but I want to understand what SLOs give us.&lt;/P&gt;&lt;P&gt;It looks like SLOs work on time slots (the finest time slot is 1 minute) and apply the desired aggregation at the time-slot level.&lt;BR /&gt;The SLO definition requires defining a timeframe; is that the time slot? Or are the time slots defined as described here: (&lt;A href="https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/get-data-points#parameters" target="_blank"&gt;https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/get-data-points#parameters&lt;/A&gt;)? I see the resolution depends on the query timeframe and the age of the data, but if not specified it can default to 120 data points.&lt;/P&gt;&lt;P&gt;If we consider the latter, and assume we generate 120 data points per hour with a default time slot of 120 data points, then the default SLO performance definition: ((builtin:service.response.time:avg:partition("latency",value("good",lt(10000))):splitBy():count:default(0))/(builtin:service.response.time:avg:splitBy():count)*(100)) will generate 2 values:&lt;BR /&gt;Value1: corresponds to the first 120 data points in the first hour = average(first 120 points)/120*100&lt;/P&gt;&lt;P&gt;Value2: corresponds to the next 120 data points in the second hour = average(last 120 points)/120*100&lt;/P&gt;&lt;P&gt;Will the SLO for two hours be the average of Value1 and Value2?&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 14 Nov 2022 21:39:18 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198338#M1143</guid>
      <dc:creator>adtuser</dc:creator>
      <dc:date>2022-11-14T21:39:18Z</dc:date>
    </item>
    <item>
      <title>Re: More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198411#M1144</link>
      <description>&lt;P&gt;Hi, I'm not 100% sure I get your question, but the SLO is evaluated over a moving timeframe. For instance, if you define a 1-day evaluation (-1d in the settings), every SLO datapoint/value is evaluated with respect to the previous 24 hours. The metric created within the SLO can then be combined like a normal metric (so if you apply an&amp;nbsp;&lt;STRONG&gt;avg&lt;/STRONG&gt; aggregation to it over a larger timeframe, you'll get the average for that timeframe).&lt;/P&gt;&lt;P&gt;Regarding the metric, one hint:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;builtin:service.response.time:avg:partition("latency",value("good",lt(10000))):splitBy():count:default(0)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;You are only defining the partition "good"; you're not filtering to only the "good" response-time entries. You should add a&amp;nbsp;&lt;STRONG&gt;:filter&lt;/STRONG&gt;&amp;nbsp;transformation to keep only entries whose latency equals "good".&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;By the way, I still suggest using a calculated metric for this.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Bye&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2022 14:15:46 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198411#M1144</guid>
      <dc:creator>paolo_fumanelli</dc:creator>
      <dc:date>2022-11-15T14:15:46Z</dc:date>
    </item>
    <item>
      <title>Re: More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198426#M1145</link>
      <description>&lt;P&gt;Thanks for the feedback, but I'm not sure I am any clearer on how the "Service Performance" SLOs work.&lt;BR /&gt;Let's use an example with a -2h timeframe and an app/setup that generates 200&amp;nbsp;monitoring data points (values) in two hours, as follows:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;120 values in the first hour, with 20 points &amp;lt; 10000 (the average of all 120 values being 12000)&lt;/LI&gt;&lt;LI&gt;80 values in the second hour, with 40 points &amp;lt; 10000 (the average of all 80 values being 5000)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;How will the Service Performance SLO be computed?&lt;BR /&gt;&lt;BR /&gt;Let's use this formula:&lt;BR /&gt;((builtin:service.response.time:avg:partition("latency",value("good",lt(10000))):splitBy():count:default(0))/(builtin:service.response.time:avg:splitBy():count)*(100))&lt;BR /&gt;&lt;EM&gt;Btw: the above was auto-generated when selecting the "Service Performance SLO". My understanding is that if there is only one partition value, we don't need a filter for it. If I am wrong, why doesn't Dynatrace add the filter to the auto-generated formula?&lt;/EM&gt;&lt;BR /&gt;Anyway, even if we add the "good" filter, as you suggested, the main question remains: how does it work?&lt;BR /&gt;&lt;BR /&gt;If we use method (1) - 120 data points per time slice - then a formula could be:&lt;/P&gt;&lt;P&gt;Value1: 20/120*100=16.6%&lt;BR /&gt;Value2: 40/80*100=50%&lt;BR /&gt;The SLO for the 2 hours could be: (16.6+50)/2=33.3%&lt;BR /&gt;&lt;BR /&gt;If we use a method (2) that takes into account the entire time window of 2 hours, the SLO for 2 hours could be (20+40)/200=30%.&lt;BR /&gt;&lt;BR /&gt;&lt;EM&gt;If all time slots have a fixed count (and ignoring the edge time slots), the two methods produce the same value. But as I understood it, the time slots can also be based on time &amp;ndash; it's not clear when that is used.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;Another method (3) could be that it uses the average of all the response times in the time slot. The SLO in this case would be 0 for the first time slot (by checking whether 12000 is less than 10000) and 1 for the second time slot (by checking whether 5000 is less than 10000), so the SLO for 2 hours could be: (0+1)/2=50%.&lt;BR /&gt;&lt;BR /&gt;If none of the above 3 methods is accurate, can you please use the above example to provide the expected SLO value (for the 2-hour window) and the formula?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Lastly, if the same SLO definition (with a 2-hour time window) is applied over 6 hours, is the expected value the average over the 3 time windows?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Again, I understand that using a calculated metric would be clearer, but I would like to understand how the Service Performance SLO works and what it does.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2022 17:19:55 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/198426#M1145</guid>
      <dc:creator>adtuser</dc:creator>
      <dc:date>2022-11-15T17:19:55Z</dc:date>
    </item>
    <item>
      <title>Re: More info on built-in metric aggregations and filters</title>
      <link>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/200258#M1146</link>
      <description>&lt;P&gt;Hi, sorry for the late reply; I lost track of your message.&lt;/P&gt;&lt;P&gt;An SLO works this way. Suppose you want to check whether the login time on your website stays under 10 seconds for at least 80% of your users.&lt;/P&gt;&lt;P&gt;The SLO definition will be:&lt;BR /&gt;(metric_that_counts_number_of_fast_logins)/(total_number_of_logins)*100&lt;/P&gt;&lt;P&gt;The time window you define (-1h, -1d...) determines the values of the operands in the calculation above, and therefore the final value of the SLO for that timeframe.&lt;/P&gt;&lt;P&gt;I hope this explains how it basically works.&lt;/P&gt;&lt;P&gt;In your case you'd like to count how many individual datapoints are above/below a threshold. You can't do that with out-of-the-box metrics, because they are already aggregated values, i.e. the original datapoints are no longer available for this purpose.&lt;/P&gt;&lt;P&gt;Bye&lt;BR /&gt;Paolo&lt;/P&gt;</description>
      <pubDate>Mon, 12 Dec 2022 15:37:52 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Automations/More-info-on-built-in-metric-aggregations-and-filters/m-p/200258#M1146</guid>
      <dc:creator>paolo_fumanelli</dc:creator>
      <dc:date>2022-12-12T15:37:52Z</dc:date>
    </item>
  </channel>
</rss>