Kubernetes Cluster migration

mkkr
Visitor

Hi,

We are migrating applications/services from one Kubernetes cluster to a new Kubernetes cluster (both in GKE). What is the recommended approach for migrating applications?


We are using dynakube.yaml and kubectl to instrument our Kubernetes Clusters.
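For context, a minimal sketch of such a dynakube.yaml (all names, URLs, and values are placeholders, and the field layout varies by Dynatrace Operator version, so check the CRD shipped with your operator):

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Placeholder environment URL
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  # Network zone for this cluster (one of the settings we tried keeping identical)
  networkZone: my-network-zone
  oneAgent:
    cloudNativeFullStack:
      # Host group (also tried keeping identical across old and new cluster)
      args:
        - --set-host-group=my-host-group
```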
We have done the migration in our test environments, but services are recreated rather than reused. As a result, service settings (such as request naming) and multidimensional analysis views are lost, and historic service data only exists in the old service view.

 

Services are not mergeable, and we have many services, so we want to avoid a manual process for each one.

What I have tried, without success:

- Using the same host group
- Using the same tags
- Using the same network zone
- Using the same name for the new Dynatrace Kubernetes cluster
- Changing the Kubernetes cluster UUID in the existing Dynatrace Kubernetes cluster settings, but this is not allowed.

I've managed to get a service linked to multiple Kubernetes clusters in the Dynatrace service view, but only to the same actual Kubernetes cluster (UUID), just with multiple new Dynatrace Kubernetes cluster entities. This could indicate that a service may, in theory, exist in different Kubernetes clusters?

I guess that services/process groups are split up per Kubernetes cluster UUID, which is why services are recreated, but is there any way around this?

Best regards
Mikkel Kristensen

12 REPLIES

pahofmann
DynaMight Guru

Services can't span across Process Groups. So if you want continuity there, you have to make sure the new processes land in the same PG.

 

By default, Dynatrace separates process groups by workload in k8s. You can use workload detection rules to group them based on different metadata.

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

I've tried Workload detection rules as well, but without success.

Process groups are still separated by Kubernetes cluster UUID.

Have you tried an Advanced Detection Rule with Standalone rule enabled? That should do the trick.

 

If all else fails, you can always use the DT_CLUSTER_ID and DT_NODE_ID environment variables; they override everything else. But I'd only use this as a last-ditch effort on k8s.
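For illustration, a hedged sketch of what that could look like on a Deployment (all names and values here are hypothetical; the point is that process group detection then keys on these fixed values instead of cluster-specific metadata):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0   # placeholder image
          env:
            # Fixed values so process group detection is identical in the
            # old and the new cluster (overrides other detection inputs).
            - name: DT_CLUSTER_ID
              value: my-service-cluster
            - name: DT_NODE_ID
              value: my-service-node
```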

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

I haven't tried that, I'll give it a go.

Thanks for the help.

stefan_penner
Dynatrace Enthusiast

Hi Mikkel, 

could you provide some more details about the setup you tried with workload detection rules for k8s?

  • Which rules did you create?
    I assume you did not include basePodName in the rule
  • Which cluster/agent version did you use? Did the agent versions match between the two k8s clusters?
  • Did you restart your pods in order to get those rules applied?
  • Did you have any host-groups in parallel?

 

Thanks, 
Stefan

techean
Dynatrace Champion

This may be experimental, but read about the product called Velero, which is used for migrating workloads between k8s clusters.

KG

I don't think the problem is the migration itself, but the inconsistent data in Dynatrace after migrating the workloads.

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

techean
Dynatrace Champion

Oh, in that case it would need complete downtime before the actual migration then.

KG

Not really. You can do a live migration; the new pods just need to map to the same process group.

Dynatrace Certified Master, AppMon Certified Master - Dynatrace Partner - 360Performance.net

techean
Dynatrace Champion

Oh, I should replicate this use case environment and try testing it. I'll keep you posted.

Meanwhile @mkkr  were you able to close this?

KG

mkkr
Visitor

Just to follow up:

We ended up cutting our losses and accepting the extra work of setting up the service settings and views again, as we had a hard deadline for the practical migration.

I don't think we'll look any further into this for now, as the migration is complete and black-box testing this is very time-consuming in the first place.
This thread was a last attempt after chat support said it was impossible, and it was created a bit late in the process, so unfortunately we didn't have much time to test your suggestions.

Anyhow, thanks for the help 😊

stefan_penner
Dynatrace Enthusiast

Hi Mikkel, 

sorry to hear that you weren't able to try the suggested approaches due to time constraints.

I just want to use this thread to document how it would work here as well in case someone faces similar requirements:
As mentioned by @pahofmann you can use Workload detection rules to group Process Groups and Services based on different metadata. In order to merge workloads across clusters into a single service within Dynatrace, it is crucial to define a workload detection rule which utilizes "Product" (assuming Stage and container name are not sufficient/unique but can be optionally combined with Product). I've attached a sample which would work across clusters, assuming you just want to apply it for one specific namespace (in this case: "merge-demo"). Hence, you can also do a migration on a per namespace basis instead of a big-bang.

KR, 
Stefan