<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenPipeline best practices? in Open Q&amp;A</title>
    <link>https://community.dynatrace.com/t5/Open-Q-A/OpenPipeline-best-practices/m-p/288439#M37853</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/91250"&gt;@calfanonerey&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;From what I’ve seen in the docs and setups I’ve worked with, the choice really depends on how similar your services are in terms of log and span processing:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Option (a)&lt;/STRONG&gt; – one dynamic route and one pipeline per service – gives you fine-grained control (different extraction rules, retention, metrics, etc.) but can quickly become hard to maintain at scale.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Option (b)&lt;/STRONG&gt; – grouping by stackenv – is easier to manage and works well if the services in a stack share similar characteristics.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Personally, I’d probably go for a &lt;STRONG&gt;hybrid&lt;/STRONG&gt; approach: start with stackenv-level pipelines and split out only the services that need specific processing logic. Automating pipeline creation via the Settings API can help a lot here.&lt;/P&gt;&lt;P&gt;That said, this is just my take based on current documentation and experience — I’d be very interested to hear how others have structured their OpenPipeline deployments and if they found a cleaner pattern!&lt;/P&gt;&lt;P&gt;Warm regards,&lt;/P&gt;&lt;P&gt;Jean&lt;/P&gt;</description>
    <pubDate>Fri, 24 Oct 2025 10:00:49 GMT</pubDate>
    <dc:creator>JeanBlanc</dc:creator>
    <dc:date>2025-10-24T10:00:49Z</dc:date>
    <item>
      <title>OpenPipeline best practices</title>
      <link>https://community.dynatrace.com/t5/Open-Q-A/OpenPipeline-best-practices/m-p/284835#M37410</link>
      <description>&lt;P&gt;Hey Everyone, looking for some advice here.&lt;/P&gt;
&lt;P&gt;I want to build out OpenPipeline for logs and spans and am wondering what the best practice is.&amp;nbsp;&lt;BR /&gt;The end goal is to have buckets at a per-service level, extract metrics from spans/logs, and process logs as needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To organize things, would it be best to do&amp;nbsp;&lt;/P&gt;
&lt;P&gt;a. 1 dynamic route per service -&amp;gt; 1 pipeline per service&amp;nbsp;&lt;/P&gt;
&lt;P&gt;b. 1 dynamic route for stackenv -&amp;gt; 1 pipeline per stackenv&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I like option a because it gives us more control, but option b seems easier to manage. Looking for advice, thanks in advance!&lt;/P&gt;</description>
      <pubDate>Tue, 30 Dec 2025 15:57:17 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Open-Q-A/OpenPipeline-best-practices/m-p/284835#M37410</guid>
      <dc:creator>calfanonerey</dc:creator>
      <dc:date>2025-12-30T15:57:17Z</dc:date>
    </item>
    <item>
      <title>Re: OpenPipeline best practices?</title>
      <link>https://community.dynatrace.com/t5/Open-Q-A/OpenPipeline-best-practices/m-p/288439#M37853</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/91250"&gt;@calfanonerey&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;From what I’ve seen in the docs and setups I’ve worked with, the choice really depends on how similar your services are in terms of log and span processing:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Option (a)&lt;/STRONG&gt; – one dynamic route and one pipeline per service – gives you fine-grained control (different extraction rules, retention, metrics, etc.) but can quickly become hard to maintain at scale.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Option (b)&lt;/STRONG&gt; – grouping by stackenv – is easier to manage and works well if the services in a stack share similar characteristics.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Personally, I’d probably go for a &lt;STRONG&gt;hybrid&lt;/STRONG&gt; approach: start with stackenv-level pipelines and split out only the services that need specific processing logic. Automating pipeline creation via the Settings API can help a lot here.&lt;/P&gt;&lt;P&gt;That said, this is just my take based on current documentation and experience — I’d be very interested to hear how others have structured their OpenPipeline deployments and if they found a cleaner pattern!&lt;/P&gt;&lt;P&gt;Warm regards,&lt;/P&gt;&lt;P&gt;Jean&lt;/P&gt;</description>
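      <!-- Editorial note: Jean's suggestion to automate pipeline creation via the Settings API 2.0 could be sketched roughly as below. This is an illustration only: the tenant URL, token, schemaId, and the shape of the "value" payload are all placeholder assumptions, not the documented OpenPipeline schema. Check GET /api/v2/settings/schemas in your environment for the real schema identifiers. -->

A minimal sketch, assuming a hypothetical OpenPipeline settings schema, of bulk-creating one pipeline per stackenv (option b) via the Settings API's `POST /api/v2/settings/objects` endpoint:

```python
# Sketch: bulk-create one pipeline per stackenv via the Dynatrace
# Settings API 2.0 (POST /api/v2/settings/objects).
# ASSUMPTIONS: the schemaId and the shape of "value" are placeholders;
# list your environment's schemas (GET /api/v2/settings/schemas) to find
# the actual OpenPipeline configuration schema before using this.
import json
import urllib.request

DT_ENV = "https://abc12345.live.dynatrace.com"  # placeholder tenant URL
API_TOKEN = "dt0c01.EXAMPLE"                    # placeholder token

def build_pipeline_object(stackenv: str) -> dict:
    """Build one settings object per stackenv, matching option (b)."""
    return {
        "schemaId": "app:dynatrace.openpipeline:logs.pipelines",  # hypothetical
        "scope": "environment",
        "value": {
            "displayName": f"logs-{stackenv}",
            # hypothetical route matcher keyed on a stackenv attribute
            "matcher": f'matchesValue(stackenv, "{stackenv}")',
        },
    }

def create_pipelines(stackenvs: list[str]) -> None:
    """POST the whole batch in one request, as the Settings API accepts
    a JSON array of settings objects."""
    payload = json.dumps([build_pipeline_object(s) for s in stackenvs]).encode()
    req = urllib.request.Request(
        f"{DT_ENV}/api/v2/settings/objects",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Api-Token {API_TOKEN}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )
    # Would fail with the placeholder credentials above; shown for shape only.
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
```

Keeping the per-stackenv objects generated from one function like this makes the later "split out only the services that need specific processing" step a matter of adding a second builder, rather than hand-editing pipelines in the UI.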
      <pubDate>Fri, 24 Oct 2025 10:00:49 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Open-Q-A/OpenPipeline-best-practices/m-p/288439#M37853</guid>
      <dc:creator>JeanBlanc</dc:creator>
      <dc:date>2025-10-24T10:00:49Z</dc:date>
    </item>
  </channel>
</rss>

