<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: DQL too much data read message in DQL</title>
    <link>https://community.dynatrace.com/t5/DQL/DQL-too-much-data-read-message/m-p/214012#M69</link>
    <description>&lt;P&gt;The limitation comes from the implementation of the lookup command, which collects the data from the nested execution block; keeping the full process group instance (PGI) information in memory is too much.&lt;BR /&gt;&lt;BR /&gt;As the timeseries query is the smaller/less expensive part of your query, you could also turn the query around.&lt;/P&gt;
&lt;LI-CODE lang="java"&gt;fetch dt.entity.process_group_instance
| lookup [
    timeseries cpuUsage = avg(dt.process.cpu.usage), by:{host.name, dt.entity.process_group_instance}, interval:1h
    | sort arrayAvg(cpuUsage), direction:"descending"
    | limit 5
], sourceField:id, lookupField:dt.entity.process_group_instance
| ....&lt;/LI-CODE&gt;
&lt;P&gt;You can also use the fields/prefix options of the lookup command to add only the fields you want to your result.&lt;BR /&gt;This example should be a good starting point: since the inner execution block now contains only 5 elements, the query becomes cheaper and faster, and the limitation no longer applies.&lt;/P&gt;</description>
    <pubDate>Mon, 05 Jun 2023 08:11:14 GMT</pubDate>
    <dc:creator>David_Hauger</dc:creator>
    <dc:date>2023-06-05T08:11:14Z</dc:date>
    <item>
      <title>DQL too much data read message</title>
      <link>https://community.dynatrace.com/t5/DQL/DQL-too-much-data-read-message/m-p/213931#M68</link>
      <description>&lt;P&gt;Is there any way to get around the message below without limiting my results and making multiple queries? My DQL is below; we have 2.75k hosts in that management zone.&lt;/P&gt;
&lt;P&gt;The lookup command's subquery read too much data. Please continue to filter the lookup table or narrow the query time range&lt;/P&gt;
&lt;LI-CODE lang="java"&gt;timeseries cpuUsage = avg(dt.process.cpu.usage), by:{host.name, dt.entity.process_group_instance}, interval:1h, from:-2h, to:now()
| sort arrayAvg(cpuUsage) desc
| limit 5
| lookup [fetch dt.entity.process_group_instance], sourceField:dt.entity.process_group_instance, lookupField:id
| filter matchesValue(lookup.managementZones, "UNIX")
| fieldsRename process_group_instance.name = lookup.entity.name
| fieldsRemove lookup.id, dt.entity.process_group_instance&lt;/LI-CODE&gt;</description>
      <pubDate>Mon, 05 Jun 2023 06:49:51 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/DQL-too-much-data-read-message/m-p/213931#M68</guid>
      <dc:creator>sivart_89</dc:creator>
      <dc:date>2023-06-05T06:49:51Z</dc:date>
    </item>
    <item>
      <title>Re: DQL too much data read message</title>
      <link>https://community.dynatrace.com/t5/DQL/DQL-too-much-data-read-message/m-p/214012#M69</link>
      <description>&lt;P&gt;The limitation comes from the implementation of the lookup command, which collects the data from the nested execution block; keeping the full process group instance (PGI) information in memory is too much.&lt;BR /&gt;&lt;BR /&gt;As the timeseries query is the smaller/less expensive part of your query, you could also turn the query around.&lt;/P&gt;
&lt;LI-CODE lang="java"&gt;fetch dt.entity.process_group_instance
| lookup [
    timeseries cpuUsage = avg(dt.process.cpu.usage), by:{host.name, dt.entity.process_group_instance}, interval:1h
    | sort arrayAvg(cpuUsage), direction:"descending"
    | limit 5
], sourceField:id, lookupField:dt.entity.process_group_instance
| ....&lt;/LI-CODE&gt;
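&lt;P&gt;For instance, the turned-around query could continue with the remaining steps from your original one. This is only a sketch, assuming the looked-up fields keep the default lookup. prefix and that the entities that didn't make the top 5 should be dropped with an isNotNull filter:&lt;/P&gt;
&lt;LI-CODE lang="java"&gt;fetch dt.entity.process_group_instance
| lookup [
    timeseries cpuUsage = avg(dt.process.cpu.usage), by:{host.name, dt.entity.process_group_instance}, interval:1h
    | sort arrayAvg(cpuUsage), direction:"descending"
    | limit 5
], sourceField:id, lookupField:dt.entity.process_group_instance
// lookup is a left join: keep only the 5 entities that matched the subquery
| filter isNotNull(lookup.cpuUsage)
// entity fields are now unprefixed, since they come from the outer fetch
| filter matchesValue(managementZones, "UNIX")
| fieldsRename process_group_instance.name = entity.name
| fieldsRemove id&lt;/LI-CODE&gt;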
&lt;P&gt;You can also use the fields/prefix options of the lookup command to add only the fields you want to your result.&lt;BR /&gt;This example should be a good starting point: since the inner execution block now contains only 5 elements, the query becomes cheaper and faster, and the limitation no longer applies.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jun 2023 08:11:14 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/DQL-too-much-data-read-message/m-p/214012#M69</guid>
      <dc:creator>David_Hauger</dc:creator>
      <dc:date>2023-06-05T08:11:14Z</dc:date>
    </item>
  </channel>
</rss>

