10 Sep 2018 06:42 PM - last edited on 19 May 2021 03:40 PM by MaciejNeumann
We have a Kubernetes cluster we use for development and integration, so containers are frequently torn down and replaced by updated pods. As a result, our vanilla install of Dynatrace Managed pushes a lot of TCP availability alerts and similar alerts for these pods. Is there a specific best practice for configuring around this issue?
There is the option to define a minimum instance threshold for process groups. Go to the process group's settings page and define the minimum number of running instances below which you want to be alerted; individual container shutdowns above that threshold will then no longer raise availability alerts.
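The same setting can also be managed programmatically. The sketch below builds the request payload for the Dynatrace Configuration API's process-group anomaly-detection settings; the endpoint path, field names, and enum values are assumptions based on that API and should be verified against your environment's API documentation. The environment URL, process group ID, and token are placeholders.

```python
import json

# Placeholders - substitute your own environment and process group.
DT_BASE_URL = "https://<your-environment>/api/config/v1"
PG_ID = "PROCESS_GROUP-0123456789ABCDEF"  # hypothetical process group ID

# Alert only when fewer than 2 instances of the process group are running,
# rather than on every individual container teardown.
payload = {
    "availabilityMonitoring": {
        "method": "MINIMUM_THRESHOLD",  # assumed enum value
        "minimumThreshold": 2,
    }
}

print(json.dumps(payload, indent=2))

# To apply it (assumed call shape, requires an API token with config scope):
# import requests
# requests.put(f"{DT_BASE_URL}/anomalyDetection/processGroups/{PG_ID}",
#              headers={"Authorization": "Api-Token <token>"},
#              json=payload)
```

Setting the threshold per process group keeps alerting meaningful for workloads that must always have a minimum footprint, while tolerating the routine pod churn of a development cluster.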
I fear the other issues, concerning failing network connections and failing service requests, have to be solved with a graceful load balancer switch on the Kubernetes side.
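On the Kubernetes side, such a graceful switch is commonly achieved with a readiness probe plus a preStop hook, so the pod is removed from the Service's endpoints before its process shuts down. A minimal sketch, assuming an HTTP health endpoint on port 8080 (the container name, path, and timings are illustrative):

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: app              # placeholder container name
          readinessProbe:
            httpGet:
              path: /healthz     # assumed health endpoint
              port: 8080
          lifecycle:
            preStop:
              exec:
                # Delay SIGTERM so the endpoint is deregistered from the
                # load balancer first, avoiding failed in-flight requests.
                command: ["sh", "-c", "sleep 10"]
```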