<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Dynatrace Managed Cluster Migration in Dynatrace Managed Q&amp;A</title>
    <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45121#M556</link>
<description>&lt;P&gt;Hi Patrick,&lt;/P&gt;&lt;P&gt;Yes, 443 between nodes. In addition, there are some IP addresses that must be reachable over 443 from all nodes so they can get to Mission Control. Those, as well as the full listing of ports and explanations, can be seen here: https://www.dynatrace.com/support/help/dynatrace-managed/dynatrace-server/which-network-ports-does-dynatrace-server-use/&lt;/P&gt;&lt;P&gt;You can also test the cluster's connection to Mission Control in the CMC UI under Settings -&amp;gt; Internet Access. This tests the connection from all nodes.&lt;/P&gt;&lt;P&gt;Hope this helps,&lt;/P&gt;&lt;P&gt;Hayden&lt;/P&gt;</description>
    <pubDate>Mon, 11 Dec 2017 18:54:16 GMT</pubDate>
    <dc:creator>hayden_miedema</dc:creator>
    <dc:date>2017-12-11T18:54:16Z</dc:date>
    <item>
      <title>Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45117#M552</link>
<description>&lt;P&gt;I want to migrate our Dynatrace Managed cluster (currently one node) to a new cloud environment. What is the best approach?&lt;/P&gt;&lt;P&gt;Can I just shut down the old cluster node and install a new one from the latest backup? Or will there be any issues (e.g. because the public IP changed)?&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 20:45:00 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45117#M552</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-05T20:45:00Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45118#M553</link>
<description>&lt;P&gt;Hey Patrick,&lt;/P&gt;&lt;P&gt;I am actually in the middle of this with a customer, with somewhat different requirements/considerations. But, as you said, there would be some issues with your approach. The main one that comes to mind is that agents are currently reporting to that (old, one-node) cluster and to an environment within it. If you moved to new hardware, you would have to procure a new environment, which would mean redeploying all agents.&lt;/P&gt;&lt;P&gt;The best way to do this is actually to add your newly provisioned node to the existing cluster. At that point, metric data will be replicated from the Cassandra database. Transaction (code-level) data, however, is not replicated in this process. So, depending on the configured transaction storage period, you would want to run this two-node cluster for as many days as you have selected there (the default is 10 days). This is just to ensure that you do not have a gap in transaction (code-level) data at any point.&lt;/P&gt;&lt;P&gt;Once this period is over, you can decommission and remove the old node from the cluster.&lt;/P&gt;&lt;P&gt;Let me know if you have questions,&lt;/P&gt;&lt;P&gt;Hayden&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 21:05:00 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45118#M553</guid>
      <dc:creator>hayden_miedema</dc:creator>
      <dc:date>2017-12-05T21:05:00Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45119#M554</link>
<description>&lt;P&gt;Hey Hayden,&lt;/P&gt;&lt;P&gt;Thanks for the input, that was my second thought as well. Though it's a bit more complicated to get the cluster nodes connected across the different environments, it should still be okay.&lt;/P&gt;&lt;P&gt;Patrick&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 08:51:01 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45119#M554</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-06T08:51:01Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45120#M555</link>
<description>&lt;P&gt;Hey Hayden,&lt;/P&gt;&lt;P&gt;Which ports need to be open between the different cluster nodes, only 443?&lt;/P&gt;</description>
      <pubDate>Mon, 11 Dec 2017 16:50:26 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45120#M555</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-11T16:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45121#M556</link>
<description>&lt;P&gt;Hi Patrick,&lt;/P&gt;&lt;P&gt;Yes, 443 between nodes. In addition, there are some IP addresses that must be reachable over 443 from all nodes so they can get to Mission Control. Those, as well as the full listing of ports and explanations, can be seen here: https://www.dynatrace.com/support/help/dynatrace-managed/dynatrace-server/which-network-ports-does-dynatrace-server-use/&lt;/P&gt;&lt;P&gt;You can also test the cluster's connection to Mission Control in the CMC UI under Settings -&amp;gt; Internet Access. This tests the connection from all nodes.&lt;/P&gt;&lt;P&gt;Hope this helps,&lt;/P&gt;&lt;P&gt;Hayden&lt;/P&gt;</description>
      <pubDate>Mon, 11 Dec 2017 18:54:16 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45121#M556</guid>
      <dc:creator>hayden_miedema</dc:creator>
      <dc:date>2017-12-11T18:54:16Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45122#M557</link>
<description>&lt;P&gt;Thanks! I'm aware of Mission Control; I thought there might be some additional ports required for replication etc. between the nodes. Only 443 was just too easy to believe &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 11 Dec 2017 19:00:34 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45122#M557</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-11T19:00:34Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45123#M558</link>
<description>&lt;P&gt;I tried the installation but it failed at first. For future reference:&lt;/P&gt;&lt;P&gt;To connect to the existing cluster node, ports 8020/8021 had to be accessible from the new node to the old one.&lt;/P&gt;&lt;P&gt;By default, the new node tried to use the internal IP of my old cluster node, which was not reachable. You can set a different IP for the master node with the installation parameter &lt;EM&gt;--seed-ip&lt;/EM&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 12 Dec 2017 13:19:17 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45123#M558</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-12T13:19:17Z</dc:date>
    </item>
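The steps from the post above can be sketched as a shell snippet. This is a hedged illustration, not an official procedure: the ports (443, 8020, 8021) and the `--seed-ip` parameter come from the thread; the installer filename and the IP are placeholders, and `nc` is just a convenient standard tool for the reachability pre-check.

```shell
# Run on the NEW node before installing: verify the old node's cluster
# ports (as mentioned in the thread) are actually reachable.
OLD_NODE_IP="203.0.113.10"   # placeholder: the old node's reachable (public) IP

for port in 443 8020 8021; do
  if nc -z -w 5 "$OLD_NODE_IP" "$port"; then
    echo "port $port reachable"
  else
    echo "port $port BLOCKED"
  fi
done

# Then run the Dynatrace Managed installer, pointing it at the reachable
# IP instead of the internal one it would pick by default.
# "dynatrace-managed-installer.sh" is a placeholder name; --seed-ip is
# the parameter named in the post above.
sudo /bin/sh dynatrace-managed-installer.sh --seed-ip "$OLD_NODE_IP"
```

If the pre-check reports a blocked port, fixing the firewall first avoids the mid-install failure described later in this thread.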
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45124#M559</link>
<description>&lt;P&gt;Another issue:&lt;/P&gt;&lt;P&gt;2017-12-12 15:42:26 Preparing firewall on cluster nodes .. failed, error: Adding IP of this machine ("192.168.1.4") to cluster node "192.168.1.68" failed.&lt;BR /&gt;2017-12-12 15:42:26 Installation failed, with status: system verified.&lt;BR /&gt;Errors occurred:&lt;BR /&gt;Cannot prepare firewall on Dynatrace cluster nodes. Error: Adding IP of this machine ("192.168.1.4") to cluster node "192.168.1.68" failed.&lt;BR /&gt;2017-12-12 15:42:26 Exit code is 3&lt;/P&gt;&lt;P&gt;I assume it's because the internal IPs are being used, and the nodes can't reach each other on those.&lt;/P&gt;&lt;P&gt;I opened a support case and will post the results here.&lt;/P&gt;</description>
      <pubDate>Tue, 12 Dec 2017 15:59:40 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45124#M559</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-12T15:59:40Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45125#M560</link>
<description>&lt;P&gt;Hi Patrick,&lt;/P&gt;
&lt;P&gt;443 alone is not enough.&lt;/P&gt;
&lt;P&gt;Cassandra needs a number of ports opened between the nodes for replication to be possible, and so does Elasticsearch (ES).&lt;/P&gt;
&lt;P&gt;Please check the documentation for an overview of the required ports: &lt;A href="https://docs.dynatrace.com/managed/shortlink/managed-network-ports" target="_self"&gt;Cluster node ports&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;With regard to the earlier statement: if you don't want to lose old user sessions, you need to wait until the user session retention period has passed before removing the old node. By default this is 35 days.&lt;/P&gt;
&lt;P&gt;Additionally, you should disable the old node as soon as Cassandra and ES replication has finished - which is basically when the new node has successfully installed and is up and running. If you don't, the old node will still accept agent traffic and will keep processing and storing code-level transactions. A disabled node's data is still available in all the dashboards. You can disable a node in the CMC.&lt;/P&gt;</description>
      <pubDate>Wed, 12 Mar 2025 12:20:38 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45125#M560</guid>
      <dc:creator>kristof_renders</dc:creator>
      <dc:date>2025-03-12T12:20:38Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45126#M561</link>
<description>&lt;P&gt;Keep in mind that only the master node will connect to Mission Control.&lt;/P&gt;&lt;P&gt;The master node changes over time; it is basically the oldest node, i.e. the one with the highest uptime.&lt;/P&gt;</description>
      <pubDate>Wed, 13 Dec 2017 08:38:59 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45126#M561</guid>
      <dc:creator>kristof_renders</dc:creator>
      <dc:date>2017-12-13T08:38:59Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45127#M562</link>
<description>&lt;P&gt;Keeping the data is not the top priority; if we lose transaction-level data it's no issue at all. The bigger concern is that we would have to reinstall all agents if we did a clean install in the new environment.&lt;/P&gt;&lt;P&gt;Thanks for the port overview, that's what I was looking for; I somehow missed it. All relevant ports are open now, but the issue still persists. I assume it's because of the private IP addresses being used.&lt;/P&gt;</description>
      <pubDate>Wed, 13 Dec 2017 13:58:39 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45127#M562</guid>
      <dc:creator>pahofmann</dc:creator>
      <dc:date>2017-12-13T13:58:39Z</dc:date>
    </item>
    <item>
      <title>Re: Dynatrace Managed Cluster Migration</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45128#M563</link>
<description>&lt;P&gt;Hi Patrick,&lt;/P&gt;&lt;P&gt;Could you tell me what caused the "Adding IP of this machine failed" error?&lt;BR /&gt;I am facing the same problem now.&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Anna&lt;/P&gt;</description>
      <pubDate>Mon, 08 Apr 2019 11:28:38 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-Managed-Q-A/Dynatrace-Managed-Cluster-Migration/m-p/45128#M563</guid>
      <dc:creator>kaefferlein</dc:creator>
      <dc:date>2019-04-08T11:28:38Z</dc:date>
    </item>
  </channel>
</rss>

