Master Node and RHEL Kube Cluster update on Dynatrace Managed

Manosirigosdyn7
Observer

Hello, we are trying to do an OS update from RHEL 7 to RHEL 9 on a Dynatrace Managed cluster (for OpenShift), and we need to find a way to safely handle the master node of the cluster during the update. Should we delete it and then create a new one? Should we promote one of the other nodes to master? If anyone knows what we should do and the steps involved, please enlighten us!

Thanks a lot,

Manos

7 REPLIES

Julius_Loman
DynaMight Legend

Can you elaborate on what you want to achieve and what your architecture is? A Dynatrace Managed cluster node does not run on OpenShift; it runs only on a Linux VM or a physical host. Are you trying to upgrade the OS of the Dynatrace Managed cluster nodes?

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Yeah, my bad, I didn't phrase it correctly. We need to do an OS upgrade to RHEL 9.4 on all the Dynatrace Managed servers. We will build new VMs, so it won't be an in-place update, and we also want the new VMs to have the same IPs as the old ones (but that's something we will work out with the customer). What I'm not sure about is how to replace the master node, and probably make a new node the master. That's the part I need some help with.

Thanks again for the help 🙏

If you don't upgrade the OS in place, you basically have the two options described in the cluster migration documentation. Since you want to keep the IPs and settings, I'd go with the route of restoring the node from backup.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Oh, you mean the backup route or the built-in replication-node route, right? I thought those only applied when migrating clusters, though. So we can use them for a Dynatrace cluster OS update too?

Theodore_x86
Organizer

Hello!

Would it be OK to use the same IPs between the old and the new cluster? For example, use the restore script command as follows:

sudo /tmp/backup-001-dynatrace-managed-installer.sh \
    --restore \
    --cluster-ip "10.176.41.168" \
    --cluster-nodes "1:10.176.41.168,3:10.176.41.169,5:10.176.41.170" \
    --seed-ip "10.176.41.168" \
    --backup-file /mnt/backup/bckp/c9dd47f0-87d7-445e-bbeb-26429fac06c6/node_1/files/19/backup-001.tar

 

Thanks.

If you have enough time, you could add one new RHEL 9.4 cluster node to the cluster each day and shut down one of the old RHEL 7 nodes to disable OneAgent traffic, so agents stop writing to the old node. Keep each old node online until your PurePath retention period has passed; you can see this value on your Cluster Management Console environment(s) screen. Repeat this process each day, adding a new cluster node and turning off OneAgent traffic on an old node, until all of your new nodes have been added. Then, once the PurePath retention period has elapsed for each of the RHEL 7 hosts, do a complete removal of the old nodes, one per day.
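The rotation above can be sketched as a day-by-day plan. This is a hypothetical outline only: the node IPs are placeholders, and the actual add/remove operations are performed in the Cluster Management Console (CMC), not by this script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the day-by-day node rotation described above.
# The echo lines stand in for manual CMC steps; all IPs are placeholders.
OLD_NODES=("10.0.0.1" "10.0.0.2" "10.0.0.3")   # existing RHEL 7 nodes (placeholders)
NEW_NODES=("10.0.1.1" "10.0.1.2" "10.0.1.3")   # replacement RHEL 9.4 nodes (placeholders)

for i in "${!OLD_NODES[@]}"; do
  day=$((i + 1))
  echo "Day ${day}: add new cluster node ${NEW_NODES[$i]} (CMC: add node)"
  echo "Day ${day}: shut down old node ${OLD_NODES[$i]} so OneAgents stop writing to it"
done

echo "After the PurePath retention period has passed for each RHEL 7 host:"
echo "remove the old nodes completely, one per day (CMC: remove node)"
```

Note that this only works if the new nodes get new IPs; as discussed below, it does not combine with reusing the old nodes' IPs, since both sets of nodes must be in the cluster at the same time.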

 

Hello Colin.

I think this is not possible, since we want to keep the IPs. The cluster will not accept two nodes with the same IP coexisting.

BR
