<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Re: Open Pipeline - processing logs from Azure in Log Analytics</title>
    <link>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257172#M46</link>
    <description>&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/4804"&gt;@y_buccellato&lt;/a&gt;&amp;nbsp;If I understood your situation correctly, the latter is true. OpenPipeline operates at ingestion time, so in OpenPipeline you are working with the raw data as it is received.&lt;BR /&gt;There is also the "Classic pipeline", which refers to the log processing rules and metric/event extractions you can find under Settings -&amp;gt; Log Monitoring. You will therefore have to "replicate" some of the built-in processing rules in your pipeline - if they are needed in your case.&lt;/P&gt;</description>
    <pubDate>Thu, 26 Sep 2024 06:11:49 GMT</pubDate>
    <dc:creator>Julius_Loman</dc:creator>
    <dc:date>2024-09-26T06:11:49Z</dc:date>
    <item>
      <title>Open Pipeline - processing logs from Azure</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257091#M45</link>
      <description>&lt;P&gt;Good afternoon community,&lt;/P&gt;&lt;P&gt;I'm struggling with OpenPipeline, as I get confused by the different settings in Dynatrace.&lt;/P&gt;&lt;P&gt;The original situation is the following:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;I "log forward" logs from Azure to DT via the Dynatrace Azure Native Service - so far so good.&lt;/LI&gt;&lt;LI&gt;When the data comes into Dynatrace, there is a processing rule (created by DT): [Built-in] cloud:azure&lt;SPAN&gt;:common&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;I create a bucket with a specific matcher to pick just some lines out of all the logs - up until here everything is vanilla.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;Now I want to create a pipeline that parses exactly the same lines to extract some additional fields.&lt;/SPAN&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN&gt;Here is my doubt: when setting up the processing step, should I consider the log lines as I can query them in a Notebook (with step 2 applied), or should I consider only the "content" field of each line before step 2 is applied?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN&gt;The already existing processing rules on logs always get me confused with the processing steps in a pipeline...&lt;BR /&gt;Thank you to whoever can clarify this,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;regards&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Sep 2024 13:29:53 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257091#M45</guid>
      <dc:creator>y_buccellato</dc:creator>
      <dc:date>2024-09-25T13:29:53Z</dc:date>
    </item>
    <item>
      <title>Re: Open Pipeline - processing logs from Azure</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257172#M46</link>
      <description>&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/4804"&gt;@y_buccellato&lt;/a&gt;&amp;nbsp;If I understood your situation correctly, the latter is true. OpenPipeline operates at ingestion time, so in OpenPipeline you are working with the raw data as it is received.&lt;BR /&gt;There is also the "Classic pipeline", which refers to the log processing rules and metric/event extractions you can find under Settings -&amp;gt; Log Monitoring. You will therefore have to "replicate" some of the built-in processing rules in your pipeline - if they are needed in your case.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Sep 2024 06:11:49 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257172#M46</guid>
      <dc:creator>Julius_Loman</dc:creator>
      <dc:date>2024-09-26T06:11:49Z</dc:date>
    </item>
    <item>
      <title>Re: Open Pipeline - processing logs from Azure</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257174#M47</link>
      <description>&lt;P&gt;Thank you - I had this intuition when the processing step in my pipeline was not applying (the matcher wasn't matching azure.&amp;lt;anyfield&amp;gt;), but it's good that you confirm this too &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Sep 2024 06:22:31 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257174#M47</guid>
      <dc:creator>y_buccellato</dc:creator>
      <dc:date>2024-09-26T06:22:31Z</dc:date>
    </item>
    <item>
      <title>Re: Open Pipeline - processing logs from Azure</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257175#M48</link>
      <description>&lt;P&gt;So potentially, if I build a pipeline in front of the existing DT configuration, I could break the existing built-in rules for Azure logs?&lt;/P&gt;</description>
      <pubDate>Thu, 26 Sep 2024 06:24:15 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Open-Pipeline-processing-logs-from-Azure/m-p/257175#M48</guid>
      <dc:creator>y_buccellato</dc:creator>
      <dc:date>2024-09-26T06:24:15Z</dc:date>
    </item>
  </channel>
</rss>

