<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Timeseries - Filter each datapoint by value threshold in DQL</title>
    <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/269968#M1700</link>
    <description>&lt;P&gt;I am not sure I got the requirement right. Here are my thoughts:&lt;BR /&gt;&lt;BR /&gt;To get the time series (CPU usage in my example) where at least one value is greater than the predefined threshold, the &lt;STRONG&gt;&lt;EM&gt;iAny&lt;/EM&gt;&lt;/STRONG&gt; function can help:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries cpu=avg(dt.host.cpu.usage), by: {dt.entity.host}
| filter iAny(cpu[]&amp;gt;80)&lt;/LI-CODE&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_0-1739432891493.png" style="width: 858px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26394i44E8023F3BEA711F/image-dimensions/858x415?v=v2" width="858" height="415" role="button" title="krzysztof_hoja_0-1739432891493.png" alt="krzysztof_hoja_0-1739432891493.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;and to keep only the data points matching this condition in the final result, an "iterative expression" is useful:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries cpu=avg(dt.host.cpu.usage), by: {dt.entity.host}
| filter iAny(cpu[]&amp;gt;80)
| fieldsAdd cpu = if(cpu[]&amp;gt;80, cpu[])&lt;/LI-CODE&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_1-1739433046090.png" style="width: 856px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26395iCF02E1F8931D01AE/image-dimensions/856x426?v=v2" width="856" height="426" role="button" title="krzysztof_hoja_1-1739433046090.png" alt="krzysztof_hoja_1-1739433046090.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 13 Feb 2025 07:53:07 GMT</pubDate>
    <dc:creator>krzysztof_hoja</dc:creator>
    <dc:date>2025-02-13T07:53:07Z</dc:date>
    <item>
      <title>Filter Time-Series Data Points by Value Threshold Using DQL</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/269779#M1692</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;
&lt;P&gt;I have a values-type metric. I'd like to display a timeseries that only counts datapoints whose value is greater than (or less than) an arbitrary value.&amp;nbsp;I don't want to filter an aggregation, but analyze each value and remove the ones I don't want.&lt;/P&gt;
&lt;P&gt;How can I achieve this?&lt;/P&gt;
&lt;P&gt;Thanks for your help!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Dec 2025 12:35:52 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/269779#M1692</guid>
      <dc:creator>jegron</dc:creator>
      <dc:date>2025-12-18T12:35:52Z</dc:date>
    </item>
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/269968#M1700</link>
      <description>&lt;P&gt;I am not sure I got the requirement right. Here are my thoughts:&lt;BR /&gt;&lt;BR /&gt;To get the time series (CPU usage in my example) where at least one value is greater than the predefined threshold, the &lt;STRONG&gt;&lt;EM&gt;iAny&lt;/EM&gt;&lt;/STRONG&gt; function can help:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries cpu=avg(dt.host.cpu.usage), by: {dt.entity.host}
| filter iAny(cpu[]&amp;gt;80)&lt;/LI-CODE&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_0-1739432891493.png" style="width: 858px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26394i44E8023F3BEA711F/image-dimensions/858x415?v=v2" width="858" height="415" role="button" title="krzysztof_hoja_0-1739432891493.png" alt="krzysztof_hoja_0-1739432891493.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;and to keep only the data points matching this condition in the final result, an "iterative expression" is useful:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries cpu=avg(dt.host.cpu.usage), by: {dt.entity.host}
| filter iAny(cpu[]&amp;gt;80)
| fieldsAdd cpu = if(cpu[]&amp;gt;80, cpu[])&lt;/LI-CODE&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_1-1739433046090.png" style="width: 856px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26395iCF02E1F8931D01AE/image-dimensions/856x426?v=v2" width="856" height="426" role="button" title="krzysztof_hoja_1-1739433046090.png" alt="krzysztof_hoja_1-1739433046090.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Feb 2025 07:53:07 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/269968#M1700</guid>
      <dc:creator>krzysztof_hoja</dc:creator>
      <dc:date>2025-02-13T07:53:07Z</dc:date>
    </item>
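Editor's note: the DQL above only runs inside Dynatrace, so as a rough illustration of the semantics this reply describes (iAny() keeps a whole series if at least one element matches; the iterative if() nulls out the non-matching data points), here is a minimal Python sketch. The host names and readings are invented sample data, not from the thread.

```python
# Rough Python illustration (not DQL) of "filter iAny(cpu[] > 80)" followed
# by "fieldsAdd cpu = if(cpu[] > 80, cpu[])". Values are invented samples.

def i_any(series, predicate):
    """True if at least one non-null element of the series matches."""
    return any(predicate(v) for v in series if v is not None)

def iterative_if(series, predicate):
    """Keep matching elements; replace the rest with None (DQL: null)."""
    return [v if v is not None and predicate(v) else None for v in series]

hosts = {
    "host-a": [70, 85, 90, 60],   # crosses the 80 threshold twice
    "host-b": [40, 55, 50, 45],   # never crosses it
}

threshold = 80
# DQL: | filter iAny(cpu[] > 80)  -- drop series that never cross the threshold
kept = {h: s for h, s in hosts.items() if i_any(s, lambda v: v > threshold)}
# DQL: | fieldsAdd cpu = if(cpu[] > 80, cpu[])  -- null out non-matching points
masked = {h: iterative_if(s, lambda v: v > threshold) for h, s in kept.items()}

print(masked)   # {'host-a': [None, 85, 90, None]}
```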
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/270239#M1719</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/25373"&gt;@krzysztof_hoja&lt;/a&gt;&amp;nbsp;!&lt;/P&gt;&lt;P&gt;Thanks for your help.&amp;nbsp;But you are filtering the result of avg(dt.host.cpu.usage). I would like to filter each data point independently, before any aggregation, to build a response-time SLO for example &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; I can do it easily with fetch logs, but I can't find a solution with the timeseries command ...&lt;/P&gt;</description>
      <pubDate>Mon, 17 Feb 2025 16:09:36 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/270239#M1719</guid>
      <dc:creator>jegron</dc:creator>
      <dc:date>2025-02-17T16:09:36Z</dc:date>
    </item>
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/270376#M1724</link>
      <description>&lt;P&gt;You cannot find it, because it does not exist in the generic case for timeseries &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I used&amp;nbsp;dt.host.cpu.usage with a breakdown by host somewhat on purpose. Let's consider this query:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries { cpu=avg(dt.host.cpu.usage), cpu_t=sum(dt.host.cpu.usage, rollup:total) } , by: {dt.entity.ec2_instance, dt.entity.host}
| filter dt.entity.host == "HOST-937E3C790B64E8B5"&lt;/LI-CODE&gt;&lt;P&gt;Besides the plain average, I added a second time series: a sum with rollup:total. This additional metric tells us how many contributions (i.e., raw measurements) happened. The result looks like this:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_0-1739910114535.png" style="width: 823px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26487iE734CF3A31D28407/image-dimensions/823x144?v=v2" width="823" height="144" role="button" title="krzysztof_hoja_0-1739910114535.png" alt="krzysztof_hoja_0-1739910114535.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;The value of cpu_t is constantly 6 every minute, because the CPU usage of a host is read every 10 seconds. But these individual measurements are not stored. What is stored, and is in fact the most granular "data point", is a statistical description of what happened, containing in the basic case 4 values: min, max, sum and count (sum and count allow calculating the average). From these building blocks, bigger aggregates can be calculated, e.g. for host groups or for longer intervals.&lt;/P&gt;&lt;P&gt;If we take a look at a similar query:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;timeseries { rt=avg(dt.service.request.response_time), rt_t=sum(dt.service.request.response_time, rollup:total) } , by: {dt.entity.service}
, filter: dt.entity.service == "SERVICE-CB0AFF6C5BC4EABE"&lt;/LI-CODE&gt;&lt;P&gt;and its result:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_0-1739910740305.png" style="width: 827px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26488iB6B725792A2D8AD5/image-dimensions/827x184?v=v2" width="827" height="184" role="button" title="krzysztof_hoja_0-1739910740305.png" alt="krzysztof_hoja_0-1739910740305.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;you can see that the number of contributions is variable: these are actual requests. But this metric also has an additional dimension that allows a more granular view. Adding "endpoint.name" breaks the series down further:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="krzysztof_hoja_1-1739910948875.png" style="width: 792px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/26489i77242C3530140D79/image-dimensions/792x379?v=v2" width="792" height="379" role="button" title="krzysztof_hoja_1-1739910948875.png" alt="krzysztof_hoja_1-1739910948875.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;You can even look deeper by splitting requests into successful and failed ones, but in the generic case you will not get to the point where a data point represents a single request. You may just have that by chance, when only one request fell into a specific bucket.&lt;BR /&gt;&lt;BR /&gt;The basic idea of a metric is to have an aggregated view of a process (bucketized time, selected dimensions only): you lose details but gain easy and fast access. If details are needed, for some cases we have spans to look deeper (service.request.response_time can be recreated from spans if no sampling occurs), but for some cases we simply do not keep the details.&lt;/P&gt;</description>
      <pubDate>Tue, 18 Feb 2025 20:43:24 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/270376#M1724</guid>
      <dc:creator>krzysztof_hoja</dc:creator>
      <dc:date>2025-02-18T20:43:24Z</dc:date>
    </item>
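Editor's note: the storage model described in the reply above (raw readings collapsed into per-minute buckets of min/max/sum/count) can be sketched in Python. The six 10-second CPU readings below are invented sample values, not data from the thread.

```python
# Sketch of the metric storage model described above: raw readings are
# collapsed into a per-minute statistical bucket (min, max, sum, count),
# so the average is recoverable but the individual readings are not.
# The six 10-second readings are invented samples.

raw_readings = [72.0, 95.0, 64.0, 88.0, 70.0, 91.0]   # one minute, every 10 s

bucket = {
    "min": min(raw_readings),
    "max": max(raw_readings),
    "sum": sum(raw_readings),
    "count": len(raw_readings),   # what sum(..., rollup:total) showed as 6
}

# avg(dt.host.cpu.usage) is derived from sum and count:
avg = bucket["sum"] / bucket["count"]

# A per-reading question like "how many readings exceeded 80?" cannot be
# answered from the bucket: three readings did, but the bucket only proves
# that at least one did (max > 80) and that not all did (min is not > 80).
at_least_one_over = bucket["max"] > 80
all_over = bucket["min"] > 80

print(avg, at_least_one_over, all_over)   # 80.0 True False
```

This is why a threshold filter in DQL can only operate on the bucketized values, not on the original measurements.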
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/273081#M1837</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/25373"&gt;@krzysztof_hoja&lt;/a&gt;&amp;nbsp;! Thanks for the explanation. Is there a roadmap item for filtering on raw data points, even if it results in slower performance?&lt;/P&gt;</description>
      <pubDate>Thu, 20 Mar 2025 15:24:26 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/273081#M1837</guid>
      <dc:creator>jegron</dc:creator>
      <dc:date>2025-03-20T15:24:26Z</dc:date>
    </item>
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277158#M2073</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/25373"&gt;@krzysztof_hoja&lt;/a&gt;&amp;nbsp;!&lt;/P&gt;&lt;P class=""&gt;This is still an ongoing issue for us. Due to the high cost of handling a large volume of logs, we rely primarily on metrics. However, our Dynatrace SLOs suffer from a lack of precision caused by data aggregation. Our customer is comparing the new Dynatrace SLOs with their legacy Splunk SLOs, which are based on raw log data, and the results are not consistent.&lt;/P&gt;&lt;P class=""&gt;Is there anything new planned on your side to address this issue?&lt;/P&gt;</description>
      <pubDate>Wed, 14 May 2025 09:13:33 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277158#M2073</guid>
      <dc:creator>jegron</dc:creator>
      <dc:date>2025-05-14T09:13:33Z</dc:date>
    </item>
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277203#M2080</link>
      <description>&lt;P&gt;What's the actual definition of SLO?&lt;/P&gt;</description>
      <pubDate>Wed, 14 May 2025 14:54:29 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277203#M2080</guid>
      <dc:creator>krzysztof_hoja</dc:creator>
      <dc:date>2025-05-14T14:54:29Z</dc:date>
    </item>
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277267#M2083</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/25373"&gt;@krzysztof_hoja&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Currently, in Splunk, SLOs are defined based on the ratio of response times that are under a threshold.&lt;/P&gt;&lt;P&gt;That is something like:&lt;/P&gt;&lt;P&gt;index = my_app duration&amp;gt;0&lt;BR /&gt;| eval threshold = 0.5&lt;BR /&gt;| eval count_under_threshold = if(duration &amp;lt; threshold, 1, 0)&lt;/P&gt;&lt;P&gt;| timechart sum(count_under_threshold) as count_under_threshold, count(duration) as&amp;nbsp;count_all span=1d&lt;/P&gt;&lt;P&gt;| eval sli = 100*count_under_threshold/count_all&lt;/P&gt;&lt;P&gt;So, each individual duration is analyzed.&lt;/P&gt;&lt;P&gt;In Dynatrace with the timeseries command, we must first aggregate durations into avg/max/min at 1-minute resolution, and only then compare them to the threshold.&lt;/P&gt;</description>
      <pubDate>Thu, 15 May 2025 08:21:29 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277267#M2083</guid>
      <dc:creator>J01am</dc:creator>
      <dc:date>2025-05-15T08:21:29Z</dc:date>
    </item>
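Editor's note: the precision gap this reply describes (a per-request SLI as in the Splunk query vs. an SLI computed on pre-aggregated minute averages) can be shown with a small Python sketch. The durations and bucket layout are invented samples; only the 0.5 s threshold is taken from the Splunk example.

```python
# Sketch of why an SLI computed per request differs from one computed on
# pre-aggregated minute averages. Durations (seconds) are invented samples
# grouped into two one-minute buckets; the 0.5 s threshold matches the
# Splunk example above.

threshold = 0.5
minute_buckets = [
    [0.1, 0.2, 0.3, 2.0],   # minute 1: one slow outlier drags the average up
    [0.4, 0.4, 0.4, 0.4],   # minute 2: every request is fast
]

# Splunk-style SLI: every individual duration is compared to the threshold.
all_durations = [d for bucket in minute_buckets for d in bucket]
per_request_sli = 100 * sum(threshold > d for d in all_durations) / len(all_durations)

# Aggregate-first SLI: each minute's *average* is compared to the threshold.
minute_avgs = [sum(b) / len(b) for b in minute_buckets]
aggregate_sli = 100 * sum(threshold > a for a in minute_avgs) / len(minute_avgs)

print(per_request_sli, aggregate_sli)   # 87.5 50.0
```

Seven of the eight requests are under the threshold, but minute 1's average (0.65 s) is not, so the two methods disagree. This matches the inconsistency reported against the legacy Splunk SLOs.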
    <item>
      <title>Re: Timeseries - Filter each datapoint by value threshold</title>
      <link>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277453#M2096</link>
      <description>&lt;P&gt;Ok, so it is for a specific set of requests.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Please consider creating two metrics for this purpose: a count of all spans/requests and a count of "slow" spans/requests.&lt;BR /&gt;Alternatively, you can use the built-in metric (dt.service.request.count) if it can act as the denominator (i.e., it has the right content and/or all the dimensions needed to extract the subset you need).&lt;/P&gt;</description>
      <pubDate>Mon, 19 May 2025 06:48:33 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Filter-Time-Series-Data-Points-by-Value-Threshold-Using-DQL/m-p/277453#M2096</guid>
      <dc:creator>krzysztof_hoja</dc:creator>
      <dc:date>2025-05-19T06:48:33Z</dc:date>
    </item>
  </channel>
</rss>

