<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Pro Tip: Extract and process huge log lines (Dynatrace tips)</title>
    <link>https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-Extract-and-process-huge-log-lines/m-p/282195#M1706</link>
    <description>Pro Tip: use OpenPipeline event extraction together with Dynatrace Workflows to process huge multi-line log records, such as Oracle Linux patch logs, and convert the extracted kernel versions into bizevents.</description>
    <pubDate>Thu, 24 Jul 2025 07:31:20 GMT</pubDate>
    <dc:creator>Maheedhar_T</dc:creator>
    <dc:date>2025-07-24T07:31:20Z</dc:date>
    <item>
      <title>Pro Tip: Extract and process huge log lines</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-Extract-and-process-huge-log-lines/m-p/282195#M1706</link>
      <description>&lt;P&gt;There are cases when you have to process log content that is huge. Let's say you want to monitor Oracle Linux patch events.&lt;BR /&gt;This isn't always easy, because a patch event produces an enormous log record, sometimes thousands of lines under the same timestamp (similar to the output you get when you run &lt;STRONG&gt;sudo apt-get update&lt;/STRONG&gt;), and at the end of it there is something that needs to be converted to a bizevent.&lt;BR /&gt;In our case, the kernel version needs to be converted to a bizevent if it exists. The kernel version, however, sits at the very end of the log content.&lt;BR /&gt;It looks like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Installed:
  kernel-4.18.0-553.33.1.el8_10.x86_64                                          
  kernel-core-4.18.0-553.33.1.el8_10.x86_64                                     
  kernel-modules-4.18.0-553.33.1.el8_10.x86_64 &lt;/LI-CODE&gt;&lt;P&gt;Almost 300,000 characters and 2,500+ lines come before this &lt;STRONG&gt;Installed:&lt;/STRONG&gt; section appears.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;We clearly can't process this directly in OpenPipeline. Although it supports processing large amounts of data, this is multi-line data, so building a processor that extracts the relevant part becomes a nightmare.&lt;BR /&gt;The good part, however, is that the whole log content is captured without truncation.&lt;BR /&gt;&lt;BR /&gt;The workaround is OpenPipeline's event extraction combined with Dynatrace Workflows.&lt;BR /&gt;&lt;BR /&gt;Here's what worked for us.&lt;BR /&gt;Whenever we see a log line coming from the patch log file, we create a Davis event from OpenPipeline with this configuration.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Maheedhar_T_1-1753341545149.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/29160iF2E2AFAD4EB08B5C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Maheedhar_T_1-1753341545149.png" alt="Maheedhar_T_1-1753341545149.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Make sure to add the last property &lt;STRONG&gt;dt.davis.is_problem_suppressed&lt;/STRONG&gt; so as to avoid problem noise. Whenever a log line is detected in the patch log, this pipeline is triggered and creates a Davis event.&lt;BR /&gt;&lt;BR /&gt;The next part is extracting this content and converting it to bizevents.&lt;BR /&gt;Create a new workflow with a Davis event trigger. In our case it looks like this.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Maheedhar_T_3-1753341759329.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/29162iD5A93B1DC9A47B0F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Maheedhar_T_3-1753341759329.png" alt="Maheedhar_T_3-1753341759329.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Another challenge here: even when you've defined the event description to be the content of the log line using the {content} placeholder, the content will be truncated due to character limits.&lt;BR /&gt;So, when the workflow is triggered, we need to fetch the log again using DQL. For that, we first fetch the host on which the patch happened.&lt;BR /&gt;&lt;BR /&gt;The next step of the workflow is to get the event details using DQL.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;fetch dt.davis.events
| filter matchesPhrase(event.name,"Patch")
| sort timestamp desc
| limit 1&lt;/LI-CODE&gt;&lt;P&gt;The limit 1 fetches the data of a single host; adjust it as needed.&lt;BR /&gt;Next, we extract the host details programmatically using JavaScript.&lt;BR /&gt;Sample code:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;import { executionsClient } from '@dynatrace-sdk/client-automation';

export default async function ({ execution_id }) {
  // Step: Get result of 'fetch_event'
  const fetchEventConfig = { executionId: execution_id, id: 'fetch_event' };
  try {
    const fetchEventResult = await executionsClient.getTaskExecutionResult(fetchEventConfig);

    // Extract 'dt.entity.host' from the first record
    const hostId = fetchEventResult.records?.[0]?.["dt.entity.host"];

    if (hostId) {
      const result = { "host": hostId };
      console.log('Extracted Host ID:', JSON.stringify(result, null, 2));
      return result;
    } else {
      console.warn('Host ID not found in fetch_event result.');
      return { "host": null };
    }
  } catch (error) {
    console.error('Error fetching fetch_event result:', error);
    return { "host": null };
  }
}&lt;/LI-CODE&gt;&lt;P&gt;Then we get the log from that host using DQL:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;fetch logs
| filter matchesPhrase(dt.entity.host,"{{ result("get_host_id").host }}")
| sort timestamp desc
| limit 1&lt;/LI-CODE&gt;&lt;P&gt;Finally, we extract the data and ingest it as bizevents.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;import { executionsClient } from '@dynatrace-sdk/client-automation';
import { businessEventsClient } from '@dynatrace-sdk/client-classic-environment-v2';

export default async function ({ execution_id }) {
  try {
    // Step 1: Fetch logs
    const logsResult = await executionsClient.getTaskExecutionResult({
      executionId: execution_id,
      id: 'get_log',
    });

    const content = logsResult?.records?.[0]?.content;
    if (!content) {
      console.warn('No content found in get_log result.');
      return;
    }

    // Extract timestamp from the beginning
    const timestampMatch = content.match(/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})/);
    const timestamp = timestampMatch ? timestampMatch[1] : null;

    // Extract the Installed: section (up to the next non-indented line, or the end of the content)
    const installedSectionMatch = content.match(/Installed:\s*([\s\S]*?)(?:\r?\n\S|$)/);
    const installedSection = installedSectionMatch ? installedSectionMatch[1] : '';

    // Extract versions from Installed section
    const kernelLine = installedSection.split('\n').find(line =&amp;gt; line.includes('kernel-') &amp;amp;&amp;amp; !line.includes('kernel-core') &amp;amp;&amp;amp; !line.includes('kernel-modules'));
    const kernelCoreLine = installedSection.split('\n').find(line =&amp;gt; line.includes('kernel-core-'));
    const kernelModulesLine = installedSection.split('\n').find(line =&amp;gt; line.includes('kernel-modules-'));

    const kernelMatch = kernelLine?.match(/kernel-([\w\.\-]+\.x86_64)/);
    const kernelCoreMatch = kernelCoreLine?.match(/kernel-core-([\w\.\-]+\.x86_64)/);
    const kernelModulesMatch = kernelModulesLine?.match(/kernel-modules-([\w\.\-]+\.x86_64)/);

    if (!timestamp || !kernelMatch || !kernelCoreMatch || !kernelModulesMatch) {
      console.warn('Required kernel data not found in Installed section.');
      return;
    }

    // Step 2: Fetch hostname from fetch_event
    const fetchEventResult = await executionsClient.getTaskExecutionResult({
      executionId: execution_id,
      id: 'fetch_event',
    });

    const hostname = fetchEventResult?.records?.[0]?.["dt.entity.host"] || 'unknown-host';

    // Step 3: Construct and ingest business event
    const bizevent = {
      specversion: '1.0',
      source: 'patching.kernel.update',
      id: crypto.randomUUID().toString(),
      type: 'kernel.update.detected',
      data: {
        timestamp,
        hostname,
        kernel: kernelMatch[1],
        kernel_core: kernelCoreMatch[1],
        kernel_modules: kernelModulesMatch[1],
      },
    };

    await businessEventsClient.ingest({
      body: bizevent,
      type: 'application/cloudevent+json',
    });

    console.log('Business event ingested successfully:', JSON.stringify(bizevent, null, 2));
    return bizevent;
  } catch (error) {
    console.error('Failed to ingest business event:', error);
  }
}&lt;/LI-CODE&gt;&lt;P&gt;Note:&lt;BR /&gt;I've used limit 1 in all the DQL queries to avoid ambiguity; change it according to your needs. The basic idea is that when you can't directly process a log record, you can use this workaround.&lt;BR /&gt;And finally, in OpenPipeline you can add a rule to retain these log records for only one day.&lt;/P&gt;&lt;P&gt;&amp;nbsp;Regards,&lt;BR /&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/76275"&gt;@Maheedhar_T&lt;/a&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 24 Jul 2025 07:31:20 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-Extract-and-process-huge-log-lines/m-p/282195#M1706</guid>
      <dc:creator>Maheedhar_T</dc:creator>
      <dc:date>2025-07-24T07:31:20Z</dc:date>
    </item>
    <item>
      <title>Re: Pro Tip: Extract and process huge log lines</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-Extract-and-process-huge-log-lines/m-p/282909#M1712</link>
      <description>&lt;P&gt;This is very useful, Maheedhar.&amp;nbsp;&lt;BR /&gt;Thanks for sharing.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 04 Aug 2025 03:41:09 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-tips/Pro-Tip-Extract-and-process-huge-log-lines/m-p/282909#M1712</guid>
      <dc:creator>theharithsa</dc:creator>
      <dc:date>2025-08-04T03:41:09Z</dc:date>
    </item>
  </channel>
</rss>

