Dynatrace Managed Cluster Migration

pahofmann
DynaMight Guru

I want to migrate our Dynatrace Managed cluster (currently a single node) to a new cloud environment. What is the best approach?

Can I just shut down the old cluster node and install a new one from the latest backup? Or will there be any issues (e.g. because the public IP changes)?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net
11 REPLIES

hayden_miedema
Inactive

Hey Patrick,

I am actually in the middle of this with a customer, with somewhat different requirements/considerations. But, as you suspected, there would be some issues with your approach. The main one that comes to mind is that your agents are currently reporting to that cluster (the old single node) and to an environment within that cluster. If you did a clean install on new hardware, you would have to procure a new environment, which would mean redeploying all agents.

The best way to do this would be to add your newly provisioned node to the existing cluster. At that point, metric replication is done via the Cassandra database. Transaction (code-level) data, however, does not replicate in this process. So, depending on what is configured as the transaction storage period, you would want to run this two-node cluster for as many days as you have selected there (the default is 10 days). This is just to ensure that you do not have a gap in transaction (code-level) data at any point.

Once this period is done, you could decommission and remove the old node from the cluster.

Let me know if you have questions,

Hayden

Hey Hayden,

Thanks for the input, that was my second thought as well. Though it's a bit more complicated to get the cluster nodes connected across the two environments, it should still be okay.

Patrick

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

Hey Hayden,

Which ports need to be open between the cluster nodes? Only 443?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

Hi Patrick,

Yes, 443 between nodes. In addition, there are some IP addresses that all nodes need to reach over 443 to get to Mission Control. Those, as well as the full listing of ports and explanations, can be found here: https://www.dynatrace.com/support/help/dynatrace-managed/dynatrace-server/which-network-ports-does-dynatrace-server-use/

You can also test the cluster's connection to Mission Control in the CMC UI under Settings -> Internet Access. This will test the connection from all nodes.
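If you want a quick check outside the UI as well, something along these lines works from each node. The node address below is just a placeholder for your other cluster node, and the curl target is only an example of a generic outbound HTTPS check, not the actual Mission Control endpoint:

# check that the other cluster node answers on 443 (replace the address with your own node's)
nc -zv -w 5 192.0.2.10 443

# check outbound HTTPS in general, e.g. against the Dynatrace website
curl -sS -o /dev/null -w "%{http_code}\n" https://www.dynatrace.com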

Hope this helps,

Hayden

Thanks! I'm aware of Mission Control; I thought there might be some additional ports required for replication etc. between the nodes. Only 443 was just too easy to believe 😉

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

Keep in mind that only the master node will connect to Mission Control.

The master node can change over time; it is basically the oldest node, i.e. the one with the highest uptime.

pahofmann
DynaMight Guru

I tried the installation but it failed at first. For future reference:

To connect to the existing cluster node, ports 8020 and 8021 had to be accessible from the new node to the old node.

By default, the new node tried to use the internal IP of my old cluster node, which was not reachable. You can set a different IP for the master node with the installation parameter --seed-ip.
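Roughly what that looked like on my side, for reference. The installer file name below is only a placeholder (use the file you downloaded), and the full command including any seed/auth parameters is the one the CMC generates when you add a new node; only --seed-ip is added on top of that:

# open the node-to-node ports the installer needs (firewalld example)
sudo firewall-cmd --permanent --add-port=8020-8021/tcp
sudo firewall-cmd --reload

# run the installer command from the CMC, pointing it at the reachable (public) IP of the existing node
# (installer file name and IP address are placeholders)
sudo /bin/sh dynatrace-managed-installer.sh --seed-ip 203.0.113.20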

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

Another issue:

2017-12-12 15:42:26 Preparing firewall on cluster nodes .. failed, error: Adding IP of this machine ("192.168.1.4") to cluster node "192.168.1.68" failed.
2017-12-12 15:42:26 Installation failed, with status: system verified.
Errors occurred:
Cannot prepare firewall on Dynatrace cluster nodes. Error: Adding IP of this machine ("192.168.1.4") to cluster node "192.168.1.68" failed.
2017-12-12 15:42:26 Exit code is 3

I assume it is because the internal IPs are being used, and the nodes can't reach each other on those.
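A rough way to check that assumption from the new node (the second address is only a placeholder for whatever address the old node is actually reachable on):

# does the old node answer on its internal IP at all?
nc -zv -w 5 192.168.1.68 443
nc -zv -w 5 192.168.1.68 8020
nc -zv -w 5 192.168.1.68 8021

# compare with the address the nodes can actually reach each other on (placeholder)
nc -zv -w 5 203.0.113.20 8020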

I opened a support case and will post the results here.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

Hi Patrick,

Could you tell me what caused the "Adding IP of this machine failed" error?
I am facing the same problem now.

Best regards,

Anna


kristof_renders
Dynatrace Champion

Hi Patrick,

Port 443 is not the only one required.

Cassandra needs several ports opened between the nodes for replication to work, and the same goes for Elasticsearch (ES).

Please check the documentation for an overview of required ports: https://www.dynatrace.com/support/help/dynatrace-...

With regard to the previous statement: if you don't want to lose old user sessions, you actually need to wait until the user session retention period has passed before removing the old node. By default this is 35 days.

Additionally, you would have to disable the old node as soon as Cassandra and ES replication has finished, which is basically when the new node has successfully installed and is up and running. If you don't do that, the old node will still accept agent traffic and will keep on processing and storing code-level transactions. A disabled node's data is still available in all the dashboards. You can disable a node in the CMC.
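If it helps, a small sketch for verifying node-to-node connectivity before (re)running the installation. The port list below is only illustrative (the ports mentioned in this thread); take the authoritative list from the documentation page linked above, and replace the address with your other node's:

# loop over the ports that need to be open between the nodes
for port in 443 8020 8021; do
  nc -zv -w 5 192.168.1.68 "$port"
done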

Keeping the data is not the top priority; if we lose transaction-level data, it's no issue at all. The bigger concern is that we would have to reinstall all agents if we did a clean install in the new environment.

Thanks for the port overview, that's what I was looking for; I somehow missed it. All relevant ports are open now, but the issue still persists. I assume it's because the private IP addresses are being used.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net
