
Kubernetes Cluster migration

mkkr
Newcomer

Hi,

We are migrating applications/services from one Kubernetes cluster to a new Kubernetes cluster (both in GKE). What is the recommended approach for migrating applications?


We are using dynakube.yaml and kubectl to instrument our Kubernetes clusters.
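For reference, a trimmed sketch of the kind of dynakube.yaml we apply with kubectl (the tenant URL and names are placeholders, and this assumes the v1beta1 DynaKube CRD from the Dynatrace Operator, not our exact spec):

```yaml
# Sketch of a DynaKube resource as applied with kubectl; assumes the
# v1beta1 CRD from the Dynatrace Operator. apiUrl is a placeholder.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Tenant API endpoint (placeholder)
  apiUrl: https://<tenant-id>.live.dynatrace.com/api
  oneAgent:
    # Classic full-stack injection on every node
    classicFullStack: {}
```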
We have done the migration in our test environments, but services are recreated rather than reused. As a result, service settings (such as request naming) and multidimensional analysis views are lost, and historic service data exists only in the old service view.

 

Services are not mergeable, and we have many services, so we want to avoid a manual process for each one.

What I have tried, without success (a sketch of the host group attempt follows this list):
- Using the same host group
- Using the same tags
- Using the same network zone
- Using the same name for the new Dynatrace Kubernetes cluster
- Changing the Kubernetes cluster UUID in the existing Dynatrace Kubernetes cluster settings (this turned out not to be allowed)
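The host group attempt looked roughly like this, with the identical group name set in both clusters' DynaKube specs ("shared-group" is a placeholder):

```yaml
# Hedged sketch: passing the same host group to the OneAgent in both
# the old and the new cluster; "shared-group" is a placeholder.
spec:
  oneAgent:
    classicFullStack:
      args:
        - --set-host-group=shared-group
```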

I have managed to get a service linked to multiple Kubernetes clusters in the Dynatrace service view, but only to the same actual Kubernetes cluster (UUID), just represented by multiple new Dynatrace Kubernetes cluster entities. This might indicate that a service could, in theory, exist across different Kubernetes clusters?

I guess that services/process groups get split up per Kubernetes cluster UUID, which is why services are recreated, but is there any way around this?

Best regards
Mikkel Kristensen

8 Replies

pahofmann
DynaMight Guru

Services can't span process groups, so if you want continuity there, you have to make sure the new processes land in the same PG.

 

By default, Dynatrace separates PGs by workload in k8s. You can use workload detection rules to group them based on different metadata.
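If you manage configuration as code, those rules live in the Settings 2.0 schema builtin:process-group.cloud-application-workload-detection; here's a rough sketch of such a settings object (the value layout is from memory, so verify it against the schema on your tenant first):

```yaml
# Rough sketch of a Settings 2.0 object for workload detection; the
# schema ID should be right, but the value layout is an assumption -
# check GET /api/v2/settings/schemas on your tenant before using it.
- schemaId: builtin:process-group.cloud-application-workload-detection
  scope: environment
  value:
    kubernetes:
      enabled: true
```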

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

I've tried workload detection rules as well, but without success.

Process groups are still separated by Kubernetes cluster UUID.

Have you tried an Advanced Detection Rule with Standalone rule enabled? That should do the trick.

 

If all else fails, you can always use the DT_CLUSTER_ID and DT_NODE_ID environment variables; they override everything else. But I'd only use this as a last-ditch effort on k8s.
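For example, something along these lines on the workload spec (all the "my-service" values are placeholders; the point is that pods in both clusters report identical values):

```yaml
# Hedged sketch: pinning process group detection via environment
# variables on a Deployment; every "my-service" value is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0  # placeholder
          env:
            # DT_CLUSTER_ID / DT_NODE_ID override Dynatrace's default
            # process group detection; keep them identical to the old
            # cluster so old and new processes merge into one PG.
            - name: DT_CLUSTER_ID
              value: my-service
            - name: DT_NODE_ID
              value: my-service
```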

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

I haven't tried that, I'll give it a go.

Thanks for the help.

techean
Dynatrace Pro

This may be experimental, but read about a product called Velero, which is used for migrating workloads between k8s clusters.
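A minimal sketch of how that could look with Velero's Backup and Restore resources (names and the namespace list are placeholders; this assumes Velero is installed in both clusters with a backup location visible to both):

```yaml
# Hedged sketch: back up a namespace in the old cluster, restore it in
# the new one; assumes Velero v1 CRDs and a shared backup location.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: migration-backup        # placeholder
  namespace: velero
spec:
  includedNamespaces:
    - my-app-namespace          # placeholder
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: migration-restore      # placeholder
  namespace: velero
spec:
  backupName: migration-backup
```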

KG

I don't think the problem is with the migration itself, but with the inconsistent data in Dynatrace after migrating the workloads.

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

Oh, in that case it would need complete downtime before the actual migration then.

KG

Not really. You can do a live migration; the new pods just need to map to the same process group.

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net