17 Mar 2025 08:04 PM
Hello,
I'm using the Dynatrace Operator with Helm chart version 1.4.0. We did a migration from classic full stack to cloud-native, and it worked fine, but during the migration we added the cluster name as the host group, whereas before we had hostgroup="".
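For context, in a cloudNativeFullStack DynaKube the host group is typically passed to the OneAgent as an installer argument in the spec. A minimal sketch of where that lives (the API version, names, apiUrl, and group value below are placeholders, not the real values from this setup):

```
# Sketch only -- apiVersion, names, apiUrl, and the host group value are assumed placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: dynatrace.com/v1beta3   # verify with `kubectl api-resources | grep dynakube`
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      args:
        # Deleting this argument (and restarting the OneAgent and app pods)
        # is what "removing the hostgroup" amounts to.
        - --set-host-group=my-aks-cluster
EOF
```

Depending on the operator version there may also be a dedicated hostGroup field under spec.oneAgent; the CRD installed by the chart shows which fields it accepts.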
Now we're facing issues with applications being grouped by host group, so I went ahead and removed the host group from the DynaKube and restarted all the services. However, I still see the host group in Dynatrace, and for the processes we see this:
(screenshot/output not included)
17 Mar 2025 09:01 PM
Most likely you did not fully restart all of the application processes (pods) and the OneAgent pods, which is needed for the host group change to take effect. That said, you probably do want to have a host group set (I highly recommend it), but then you need to configure the workload detection rules correctly to match your deployments.
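For what it's worth, a restart along these lines is usually what's meant; a rough sketch, assuming the OneAgent DaemonSet is named dynakube-oneagent in the dynatrace namespace and the application is a Deployment called my-app (all placeholder names):

```
# Restart the OneAgent DaemonSet pods so they come up without the old host group.
kubectl -n dynatrace rollout restart daemonset dynakube-oneagent
kubectl -n dynatrace rollout status daemonset dynakube-oneagent

# Restart the monitored workloads so the injected code modules are re-initialised as well.
kubectl -n my-app-namespace rollout restart deployment my-app
kubectl -n my-app-namespace rollout status deployment my-app
```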
18 Mar 2025 02:35 PM
Interesting. I even removed all the Dynatrace resources from the Kubernetes cluster (AKS), and after reinstalling, the node keeps showing the host group. I'm trying to avoid restarting/recreating the nodes, as that affects other applications. Maybe I still have some containers running? Do you have any suggestions for things to try inside the node? I was trying to find the uninstall script on the node but could not find it after we migrated to cloud-native.
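One way to look around inside a node without recreating it is a node debug pod. A sketch, assuming the old classic-stack OneAgent used the default install path /opt/dynatrace/oneagent (the node name and paths are assumptions):

```
# Start a debug pod on the node; the node's filesystem is mounted at /host inside it.
kubectl debug node/<node-name> -it --image=busybox

# Inside the debug pod, look for leftovers from the old host-installed (classic) OneAgent.
ls /host/opt/dynatrace/oneagent/agent/     # the classic uninstall.sh usually lives here, if anything is left
ls /host/var/lib/dynatrace/                # runtime/config data the agent may have written
ps | grep -i oneagent                      # any agent processes still running on the node
```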
19 Mar 2025 07:33 AM
Did you try restarting one of the pods AFTER removing the hostgroup?
18 Mar 2025 07:09 AM
Hi @dkroger
as a tested, practical solution, the K8s admin can remove or update the host group assignment in the YAML file and redeploy it, to ensure all updated configuration is reflected accurately in Dynatrace.
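A rough sketch of that redeploy step, assuming the manifest file is called dynakube.yaml and the default dynatrace namespace (both assumptions):

```
# Re-apply the edited manifest with the host group assignment removed or updated.
kubectl apply -f dynakube.yaml

# Verify the live DynaKube no longer carries the old host group.
kubectl -n dynatrace get dynakube -o yaml | grep -i 'host-group\|hostgroup' \
  || echo "no host group found in the DynaKube spec"
```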
BR,
Peter
18 Mar 2025 02:37 PM
By the YAML file, do you mean the dynakube.yaml? I did that; I think that is why it is showing a mismatch on the host group, because even the init pod that gets deployed with the dev apps has an empty host group.
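To confirm what the injection actually passed, one thing to check is the Dynatrace-injected pieces on a freshly restarted application pod; a sketch, with the pod and namespace names as placeholders (whether a host group value appears there depends on the operator version):

```
# Dump the pod spec and grep for the Dynatrace-injected init container, annotations,
# and any host-group reference.
kubectl -n my-app-namespace get pod <pod-name> -o yaml | grep -i -A 5 'dynatrace\|host-group'
```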