<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Logs on Grail - efficient maintenance of processing rules using OpenPipeline (or any other methods in Dynatrace) in Log Analytics</title>
    <link>https://community.dynatrace.com/t5/Log-Analytics/Logs-on-Grail-efficient-maintenance-of-processing-rules-using/m-p/267613#M1184</link>
    <description>&lt;P&gt;Great question; we hear it quite often. Currently there is no great way to solve this, but we are working on a feature that will let you combine pipelines and layer one pipeline's ingest functionality onto another.&lt;/P&gt;
&lt;P&gt;This will make it possible to combine the functionality that Dynatrace provides out of the box with custom processors, and to create global (or widely applicable) pipelines and combine them&amp;nbsp;&amp;ndash; for example, a pipeline for global bucket and security-context assignment, or one for common technology parsing that is identical across many pipelines. Together with our upcoming fine-grained permissions, this will also allow access and edit control at the stage level, so that, for example, teams cannot interfere with a global bucket-assignment scheme.&lt;/P&gt;</description>
    <pubDate>Thu, 16 Jan 2025 10:30:33 GMT</pubDate>
    <dc:creator>thomas_billi</dc:creator>
    <dc:date>2025-01-16T10:30:33Z</dc:date>
    <item>
      <title>Logs on Grail - efficient maintenance of processing rules using OpenPipeline (or any other methods in Dynatrace)</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Logs-on-Grail-efficient-maintenance-of-processing-rules-using/m-p/267607#M1183</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;
&lt;P&gt;In an organization that has many "general/common" parsing rules for its logs, but also some rules that are unique to a log type or business process, how would you structure your OpenPipeline architecture so that you maintain only one copy of the general rules instead of copying them into each pipeline?&lt;/P&gt;</description>
&lt;P&gt;Gil.&lt;/P&gt;</description>
      <pubDate>Mon, 26 May 2025 08:18:17 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Logs-on-Grail-efficient-maintenance-of-processing-rules-using/m-p/267607#M1183</guid>
      <dc:creator>gilgi</dc:creator>
      <dc:date>2025-05-26T08:18:17Z</dc:date>
    </item>
    <item>
      <title>Re: Logs on Grail - efficient maintenance of processing rules using OpenPipeline (or any other methods in Dynatrace)</title>
      <link>https://community.dynatrace.com/t5/Log-Analytics/Logs-on-Grail-efficient-maintenance-of-processing-rules-using/m-p/267613#M1184</link>
      <description>&lt;P&gt;Great question; we hear it quite often. Currently there is no great way to solve this, but we are working on a feature that will let you combine pipelines and layer one pipeline's ingest functionality onto another.&lt;/P&gt;
&lt;P&gt;This will make it possible to combine the functionality that Dynatrace provides out of the box with custom processors, and to create global (or widely applicable) pipelines and combine them&amp;nbsp;&amp;ndash; for example, a pipeline for global bucket and security-context assignment, or one for common technology parsing that is identical across many pipelines. Together with our upcoming fine-grained permissions, this will also allow access and edit control at the stage level, so that, for example, teams cannot interfere with a global bucket-assignment scheme.&lt;/P&gt;</description>
      <pubDate>Thu, 16 Jan 2025 10:30:33 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Log-Analytics/Logs-on-Grail-efficient-maintenance-of-processing-rules-using/m-p/267613#M1184</guid>
      <dc:creator>thomas_billi</dc:creator>
      <dc:date>2025-01-16T10:30:33Z</dc:date>
    </item>
  </channel>
</rss>

