We are migrating applications/services from one Kubernetes cluster to a new Kubernetes cluster (both in GKE). What is the recommended approach when migrating applications?
We are using dynakube.yaml and kubectl to instrument our Kubernetes Clusters.
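For reference, our instrumentation is along these lines (a simplified sketch; names, namespace, and the API URL are placeholders, and the exact apiVersion depends on the Operator release):

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Placeholder environment URL
  apiUrl: https://ENVIRONMENT_ID.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      # Same host group as on the old cluster (one of the things we tried)
      args:
        - --set-host-group=my-host-group
```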
We have done the migration on our test environments, but services are recreated rather than reused. As a result, service settings (such as request naming) and "Multidimensional analysis" views are lost, and historic service data only exists in the old service entity.
Services are not mergeable, and we have many services, so we want to avoid a manual process for each service.
What I have tried, without success:
- Use the same host group
- Use the same tags
- Use the same network zone
- Use the same name for the new Dynatrace Kubernetes Cluster
- Change the Kubernetes cluster UUID in the existing Dynatrace Kubernetes Cluster settings (this is not allowed)
I've managed to get a service linked to multiple Kubernetes clusters in the Dynatrace service view, but only to the same actual Kubernetes cluster (UUID), represented by multiple new Dynatrace Kubernetes Cluster entities. This might indicate that a service could, in theory, exist across different Kubernetes clusters?
I guess that services/process groups get split up per Kubernetes cluster UUID, which is why services are recreated, but is there any way around this?
Services can't span across process groups. So if you want continuity there, you have to make sure the new processes land in the same process group (PG).
By default, Dynatrace separates process groups by workload in Kubernetes. You can use workload detection rules to group them based on different metadata.
Have you tried an advanced detection rule with the "Standalone rule" option enabled? That should do the trick.
If all else fails, you can always use the DT_CLUSTER_ID and DT_NODE_ID environment variables; they take precedence over everything else. But I'd only use this as a last-ditch effort on k8s.
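As a sketch, setting that override on a workload would look roughly like this (Deployment name, image, and the DT_CLUSTER_ID value are placeholders; the key point is that processes on both the old and new cluster must see the same value to end up in one process group):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - name: my-service
          image: my-service:latest
          env:
            # Placeholder value: processes sharing the same DT_CLUSTER_ID
            # are merged into one process group, overriding Dynatrace's
            # default per-cluster/per-workload detection.
            - name: DT_CLUSTER_ID
              value: "my-service-pg"
```

DT_NODE_ID can additionally be set to distinguish individual instances within that group, but for merging services across clusters a shared DT_CLUSTER_ID is the relevant knob.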