22 Dec 2023 09:29 AM - edited 11 Jan 2024 09:13 AM
Short gaps in monitoring of one to two minutes are expected behavior.
When the settings for a Kubernetes cluster change, the old configuration is removed from the ActiveGate, and the system tries to find the most appropriate ActiveGate to monitor the Kubernetes cluster with the new settings. Once the best matching ActiveGate is determined, monitoring with the new configuration starts on the assigned ActiveGate. Depending on the data (different metric types, events, etc.), it may take up to two minutes until monitoring resumes.
As an example of such a settings change, we turn off the monitoring of events.
With the following command, we can observe the change in the monitoring state of the ActiveGate involved. (For this example, we have one containerized ActiveGate in the monitored Kubernetes cluster.)
watch 'kubectl logs -n dynatrace k8s8459-activegate-0 2> /dev/null | grep -E "Configuration (added|updated|removed)" | tail -8'
We see that at 11:18 UTC the (old) monitoring configuration is removed, and at 11:19 UTC the new configuration becomes active.
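The difference between the two log timestamps can be double-checked with a small shell sketch (GNU `date` assumed; the timestamps are simply the ones quoted above):

```shell
# Timestamps taken from the ActiveGate log excerpt (UTC)
removed="11:18"
added="11:19"

# Convert both to epoch seconds and take the difference
gap_s=$(( $(date -u -d "$added" +%s) - $(date -u -d "$removed" +%s) ))
echo "configuration gap: $(( gap_s / 60 )) minute(s)"   # prints "configuration gap: 1 minute(s)"
```

Note that the gap seen in metric data can be somewhat longer than this log-level gap, as described above.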
In the Data Explorer, we can see this gap of two minutes for metrics originating from this ActiveGate. (Note: the local time is CET, which is UTC+1.)
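The UTC-to-CET offset mentioned in the note can be illustrated with GNU `date` (the concrete date below is only an example; in winter, CET is UTC+1):

```shell
# Render a UTC log timestamp in Central European Time
TZ=Europe/Berlin date -d '2023-12-22 11:18 UTC' '+%H:%M %Z'   # prints "12:18 CET"
```

This is why an event logged at 11:18 UTC appears at 12:18 in a chart rendered in CET.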