<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup in Dynatrace Managed Q&amp;A</title>
    <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192139#M2033</link>
    <description>&lt;P&gt;Thanks for clarifying that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was thinking that a 3-node cluster losing 1 node was in a similar situation to a cluster built with only 2 nodes (with the split-brain issue), which is not the case.&lt;/P&gt;</description>
    <pubDate>Fri, 05 Aug 2022 13:29:36 GMT</pubDate>
    <dc:creator>EduardLaGrange</dc:creator>
    <dc:date>2022-08-05T13:29:36Z</dc:date>
    <item>
      <title>Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192028#M2028</link>
      <description>&lt;P&gt;Hi I have read the documentation (&lt;A href="https://www.dynatrace.com/support/help/setup-and-configuration/dynatrace-managed/basic-concepts/dynatrace-managed-cluster-failover-mechanism" target="_blank" rel="noopener"&gt;https://www.dynatrace.com/support/help/setup-and-configuration/dynatrace-managed/basic-concepts/dynatrace-managed-cluster-failover-mechanism&lt;/A&gt;) for standard high-availability setup and I am trying to figure out the best way to provide fail-over for our small cluster deployed over 2 datacenters.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We currently have a 3-node cluster split over 2 datacenters.&amp;nbsp; Presumably, if the datacenter with the single node ("DC2") goes down, we should be OK to continue processing with the 2 nodes in "DC1".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However if "DC1" goes down we are basically dead - leaving a single node.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Adding another node to "DC2" does not improve the situation from a redundancy perspective.&amp;nbsp; Adding a node in each DC ("DC1" = 3 nodes, "DC2" = 2 nodes) still leaves us vulnerable if "DC1" goes down and also breaks the rule/guidance in the documentation - "&lt;SPAN&gt;If you plan to distribute nodes in separate data centers, you shouldn't deploy more than two nodes in each data center."&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;What are our options here if I want to survive the loss of either "DC1" or "DC2"?&amp;nbsp; Is Premium HA our only option?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Mar 2023 11:12:47 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192028#M2028</guid>
      <dc:creator>EduardLaGrange</dc:creator>
      <dc:date>2023-03-23T11:12:47Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192089#M2029</link>
      <description>&lt;P&gt;Thanks,&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/43655"&gt;@EduardLaGrange&lt;/a&gt;&amp;nbsp;for this question. It's a good one and comes up often.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The way you have it currently, with cluster nodes split across 2 datacenters, is actually doing more harm than good. This is because there's network latency between DCs and a higher risk of network issues between them, causing a split-brain situation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The solution you pick should depend on your needs - for example, as measured by Recovery Point Objective (RPO) and Recovery Time Objective (RTO). Possible solutions for you:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) Use just one data center and leverage the backup-restore capability to recover to the 2nd data center in case of a disaster. This solution has a medium RTO (about 1h, depending on the data size) and a medium RPO (24h).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) Use just one data center and leverage &lt;A href="https://github.com/dynatrace-oss/dynatrace-monitoring-as-code" target="_self"&gt;Monitoring as Code&lt;/A&gt;&amp;nbsp;to back up the configuration. In the 2nd data center, keep the infrastructure and a separate cluster installation so you can quickly redeploy the configuration in case of a disaster in the 1st data center. This solution has a high RPO (your monitored data is lost; only the configuration is persisted) and a low RTO.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3) Use &lt;A href="https://www.dynatrace.com/support/help/shortlink/managed-multi-data-center" target="_blank" rel="noopener"&gt;Premium High Availability&lt;/A&gt; to replicate the data between data centers. This solution has the lowest RTO and RPO.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;...&amp;nbsp; Or you add a third data center, and then there are more options.&lt;/P&gt;</description>
      <pubDate>Thu, 11 Aug 2022 09:21:14 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192089#M2029</guid>
      <dc:creator>Radoslaw_Szulgo</dc:creator>
      <dc:date>2022-08-11T09:21:14Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192100#M2030</link>
      <description>&lt;P&gt;I agree with&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/25371"&gt;@Radoslaw_Szulgo&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/43655"&gt;@EduardLaGrange&lt;/a&gt;&amp;nbsp;Kindly go ahead and add nodes in a 3rd data center. Try keeping one data center active at a time, and, with the help of an admin expert, try replicating data across all data centers.&lt;/P&gt;&lt;P&gt;You can explore the other suggestions posted by Radoslaw too.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Aug 2022 11:36:27 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192100#M2030</guid>
      <dc:creator>techean</dc:creator>
      <dc:date>2022-08-05T11:36:27Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192131#M2031</link>
      <description>&lt;P&gt;Thanks &lt;SPAN&gt;Radoslaw&amp;nbsp;&lt;/SPAN&gt;for the reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;One more question here ...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If we lose one node in our 3-node cluster (all nodes in the same DC), are we still vulnerable to "split-brain" (though less so than with nodes split between 2 DCs)?&amp;nbsp; I am asking this in light of the documentation stating that a 3-node cluster can survive the loss of one node.&amp;nbsp;&amp;nbsp;Is there a practical time limit on running with only 2 nodes?&lt;/P&gt;</description>
      <pubDate>Fri, 05 Aug 2022 12:13:24 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192131#M2031</guid>
      <dc:creator>EduardLaGrange</dc:creator>
      <dc:date>2022-08-05T12:13:24Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192136#M2032</link>
      <description>&lt;P&gt;If you have 3 nodes, you have 3 copies of the data (excluding log events, where there are 2 copies). So when 1 node is lost, you still have 2 copies - they constitute a majority, and the state of the data is consistent. So there is no split-brain situation here - these two should have the same state. In such a situation, Cassandra writes "hints" to a file - the data updates that should be stored on the node that is down. Hints are kept for a sliding 3-hour window. If the node comes back within that time, the hints are replayed to it; if it comes back later, the data on the returning node needs to be repaired (resynchronized).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If another node is lost, you're left with 1 copy of the data. No data is lost yet - however, it may not be up to date and will need a repair/bootstrap.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have only 2 nodes, each of those two can have a different state of the data. That's why we call that situation a split-brain: each part "thinks" its data is the right one.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Aug 2022 13:13:45 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192136#M2032</guid>
      <dc:creator>Radoslaw_Szulgo</dc:creator>
      <dc:date>2022-08-05T13:13:45Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192139#M2033</link>
      <description>&lt;P&gt;Thanks for clarifying that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was thinking that a 3-node cluster losing 1 node was in a similar situation to a cluster built with only 2 nodes (with the split-brain issue), which is not the case.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Aug 2022 13:29:36 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/192139#M2033</guid>
      <dc:creator>EduardLaGrange</dc:creator>
      <dc:date>2022-08-05T13:29:36Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment for Dynatrace Managed in a fail-over setup at 2 datacenter setup</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/265481#M3974</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;SPAN&gt;Radoslaw,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;We have Premium HA with 11 nodes in each cluster.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;So how many copies of the data will there be?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Regards,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Jalpesh Shelar&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Dec 2024 11:23:26 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Deployment-for-Dynatrace-Managed-in-a-fail-over-setup-at-2/m-p/265481#M3974</guid>
      <dc:creator>jalpeshs</dc:creator>
      <dc:date>2024-12-17T11:23:26Z</dc:date>
    </item>
  </channel>
</rss>

