I have a monitor that runs from 7 locations. We have configured it to send out an alert if 1 of the 7 locations reports a failure. However, there is a delay in the email notification: although the problem is created immediately, it takes about 7-10 minutes for the notification to reach my mailbox. Please help.
It’s more likely a problem with your SMTP server/mailbox than with Dynatrace. In our environments (SaaS and Managed) we see the email within seconds after a problem is raised in DT.
When you go to the integrations section in Settings, where email is configured, and click "Send test notification", does it show up in your mailbox immediately?
The delay may be caused by the consolidation process across nodes that Dynatrace uses to avoid falsely alerting on partial data. I believe an RFE was submitted for this, so a faster processing time may be in the works.
You can find more information on this here:
Hope this helps!
I agree with Justin. We see the same issue even when the alert profile is set to alert immediately. For example, at 8am Host A jumps to 100% CPU from its normal 2% usage. Within 5 minutes the problem shows up in the UI as a problem card, then the alert profile kicks in as "Immediately (Once detected)" and the notification is sent via the integration (e.g. email). At this point there isn't much we can do, as it's part of the Davis AI.
There is an RFE for improving this:
While reducing alerting delays generally comes at the cost of reduced confidence, I can think of many examples where minimizing alerting delay matters far more than achieving higher confidence. There are even cases where a single metric datapoint breaching a threshold provides more than enough confidence to be actionable.
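To illustrate the tradeoff: requiring N consecutive breaching datapoints before alerting adds roughly (N-1) polling intervals of delay. Here is a minimal sketch of that idea (hypothetical thresholds and sample data, not Dynatrace's actual evaluation logic):

```python
def first_alert_index(datapoints, threshold, consecutive=1):
    """Return the index at which an alert would fire, or None.

    An alert fires once `consecutive` datapoints in a row breach
    `threshold`. With consecutive=1, a single breach is enough.
    """
    streak = 0
    for i, value in enumerate(datapoints):
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return i
    return None

# CPU samples, one per minute; the spike starts at index 3.
cpu = [2, 2, 3, 100, 100, 100, 100]

print(first_alert_index(cpu, threshold=90, consecutive=1))  # fires at index 3
print(first_alert_index(cpu, threshold=90, consecutive=3))  # fires at index 5
```

With one-minute sampling, the confidence-building variant (consecutive=3) alerts two minutes later than the single-datapoint variant, which is exactly the delay/confidence tradeoff being discussed.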