<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Extract logs via API in DQL</title>
    <link>https://community.dynatrace.com/t5/DQL/Extract-logs-via-API/m-p/242517#M817</link>
    <description>&lt;P&gt;Extracting a large volume of log data from Grail via the Storage Query API: the Notebook hits its 100,000-record limit, and long-running queries expire before "/query:poll" returns results.&lt;/P&gt;</description>
    <pubDate>Mon, 15 Apr 2024 07:42:53 GMT</pubDate>
    <dc:creator>wellpplava</dc:creator>
    <dc:date>2024-04-15T07:42:53Z</dc:date>
    <item>
      <title>Extract logs via API</title>
      <link>https://community.dynatrace.com/t5/DQL/Extract-logs-via-API/m-p/242517#M817</link>
      <description>&lt;P&gt;Hello!&lt;/P&gt;
&lt;P&gt;I need to extract a large volume of data from Grail. It was not possible through the Notebook because I hit the limit of 100,000 records.&lt;/P&gt;
&lt;P&gt;I need data from the last 30 days, but I can extract it in 24-hour chunks to avoid problems (sketched below, after the request body).&lt;/P&gt;
&lt;P&gt;The difficulty I am having is that, when I execute my query via "/query:execute", because it is large, I receive the status "RUNNING".&lt;/P&gt;
&lt;P&gt;I then call "/query:poll" to retrieve the results, but two problems arise here: either I get error 410, saying that the results have expired, or the browser crashes (even though my computer has a good configuration). What would you recommend I do?&lt;/P&gt;
&lt;P&gt;My body:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "query": "fetch logs\n| filter dt.system.bucket==\"bucketABC\"\n| filter matchesValue(k8s.container.name, \"containerABC\") and matchesPhrase(content, \"content\")\n| parse content, \"\"\"DATA 'for customer ' SPACE? LD:CPF.passo1'\"'\"\"\"\n| fields `timestamp.passo1` = timestamp, `status.passo1` = status, `content.passo1` = content, CPF.passo1\n| lookup [fetch logs\n | filter dt.system.bucket==\"bucketABC\"\n\t | filter ((matchesValue(k8s.container.name, \"containerABC\") and matchesPhrase(content, \"content\") and matchesPhrase(content, \"content\")))\n\t | parse content, \"\"\"DATA 'customerId [' SPACE? LD:CPF.passo2']'\"\"\"\n | fields `timestamp.passo2` = timestamp, `status.passo2` = status, `content.passo2` = content, CPF.passo2], lookupField:CPF.passo2, sourceField:CPF.passo1, prefix:\"-\"\n | lookup [fetch logs\n | filter dt.system.bucket==\"bucketABC\"\n | filter ((matchesValue(k8s.container.name, \"containerABC\") and matchesPhrase(content, \"content\")))\n | parse content, \"\"\"DATA 'customerId [' SPACE? LD:CPF.passo3']'\"\"\"\n | fields `timestamp.passo3` = timestamp, `status.passo3` = status, `content.passo3` = content, CPF.passo3], lookupField:CPF.passo3, sourceField:CPF.passo1, prefix:\"--\"\n | lookup [fetch logs\n | filter dt.system.bucket==\"bucketABC\"\n | filter ((matchesValue(k8s.container.name, \"containerABC\") and (matchesPhrase(content, \"content\"))))\n | parse content, \"\"\"DATA 'customer ' SPACE? LD:CPF.passo4'\"'\"\"\"\n | fields `timestamp.passo4` = timestamp, `status.passo4` = status, `content.passo4` = content, CPF.passo4], lookupField:CPF.passo4, sourceField:CPF.passo1, prefix:\"---\"",
  "defaultTimeframeStart": "2024-04-09T00:00:00.123Z",
  "defaultTimeframeEnd": "2024-04-09T23:59:59.123Z",
  "timezone": "GMT-3",
  "locale": "en_US",
  "maxResultRecords": 1000000000000,
  "maxResultBytes": 1000000,
  "fetchTimeoutSeconds": 600,
  "requestTimeoutMilliseconds": 10000,
  "enablePreview": true,
  "defaultScanLimitGbytes": 500
}&lt;/LI-CODE&gt;
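&lt;P&gt;For the 24-hour chunks, the plan is to generate one request body per day, along these lines (a rough sketch; "base_body" stands in for the JSON above):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Rough sketch: one request body per 24-hour window over the last 30
# days. `base_body` stands in for the JSON body shown above.
from datetime import datetime, timedelta, timezone

def daily_bodies(base_body, days=30):
    """Yield one query body per day, newest window first."""
    end = datetime.now(timezone.utc).replace(microsecond=0)
    for i in range(days):
        stop = end - timedelta(days=i)
        start = stop - timedelta(days=1)
        yield {
            **base_body,
            "defaultTimeframeStart": start.isoformat(),
            "defaultTimeframeEnd": stop.isoformat(),
        }&lt;/LI-CODE&gt;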
</description>
      <pubDate>Mon, 15 Apr 2024 07:42:53 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Extract-logs-via-API/m-p/242517#M817</guid>
      <dc:creator>wellpplava</dc:creator>
      <dc:date>2024-04-15T07:42:53Z</dc:date>
    </item>
    <item>
      <title>Re: Extract logs via API</title>
      <link>https://community.dynatrace.com/t5/DQL/Extract-logs-via-API/m-p/258528#M1308</link>
      <description>&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/69551"&gt;@wellpplava&lt;/a&gt;&lt;BR /&gt;You'll need a while loop until you get the SUCCEEDED state.&lt;BR /&gt;&lt;BR /&gt;This is a Python example I am using in my app:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;from time import sleep

import requests

def get_results(bearer_token, requestToken):
    """Poll query:poll until the query leaves the RUNNING state and
    return the result records (or None on failure)."""
    try:
        if not bearer_token:
            print("Failed to retrieve bearer token.")
            return None

        url = 'https://{environmentid}.apps.dynatrace.com/platform/storage/query/v1/query:poll'
        headers = {
            "accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Bearer {bearer_token}"
        }
        params = {
            'request-token': requestToken,
            'request-timeout-milliseconds': '60',
            'enrich': 'metric-metadata',
        }

        response = requests.get(url, params=params, headers=headers)

        # Keep polling while the query is still running on the server.
        while response.json()['state'] == 'RUNNING':
            print(
                f"Status: {response.json()['state']}\n"
                f" Progress: {response.json()['progress']}\n"
                f" Result TTL (seconds): {response.json()['ttlSeconds']}\n"
                f"Trying in 2 sec...\n"
            )
            sleep(2)
            response = requests.get(url, params=params, headers=headers)

        if response.json()['state'] == 'SUCCEEDED':
            print(
                f"Status: {response.status_code}\n"
                f"State: {response.json()['state']}\n"
                f"Returned records: {str(response.json()['result']['records'])[:50]}"
            )
            return response.json()['result']['records']

        # Any other terminal state (e.g. FAILED) carries no result.
        print(
            f"Something is not right!\n"
            f"Status: {response.status_code}\n"
            f"{response.json()['error']['details']['errorMessage']}\n"
            f"{response.json()['error']['details']['errorType']}"
        )
        return None
    except Exception as e:
        print(f"Error: {str(e)}")
        return None&lt;/LI-CODE&gt;&lt;P&gt;You'll need the bearer token to perform the query and the request token that is returned when you start the query.&lt;/P&gt;
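&lt;P&gt;For reference, starting the query looks roughly like this (a sketch: the endpoint mirrors the poll URL above, and I'm assuming the "requestToken" field of the execute response is what you pass to the poll call):&lt;/P&gt;&lt;LI-CODE lang="python"&gt;# Sketch: start the query, then hand the returned request token to
# get_results() above. {environmentid} is a placeholder, as in the
# poll URL; `data` is the request body dict shown below.
import requests

def execute_query(bearer_token, data):
    url = 'https://{environmentid}.apps.dynatrace.com/platform/storage/query/v1/query:execute'
    headers = {
        "accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {bearer_token}"
    }
    response = requests.post(url, json=data, headers=headers)
    response.raise_for_status()
    # The execute response carries the token that query:poll expects.
    return response.json()["requestToken"]

# Usage:
# records = get_results(bearer_token, execute_query(bearer_token, data))&lt;/LI-CODE&gt;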
                "query": query,
                # "defaultTimeframeStart": start_date,
                # "defaultTimeframeEnd": end_date,
                "timezone": timezone,
                "locale": region,
                "maxResultRecords": 1000000,
                "maxResultBytes": 100000000,
                "fetchTimeoutSeconds": 6000,
                "requestTimeoutMilliseconds": 1000,
                "enablePreview": False,
                "defaultScanLimitGbytes": 10000
            }&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 08 Oct 2024 06:47:09 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/DQL/Extract-logs-via-API/m-p/258528#M1308</guid>
      <dc:creator>MartinBurgos</dc:creator>
      <dc:date>2024-10-08T06:47:09Z</dc:date>
    </item>
  </channel>
</rss>

