10 Sep 2021 05:08 PM
Every Tuesday through Friday, a batch process kicks off around 5:30 AM and runs for roughly 30 minutes. It generates a large spike in throughput, and response times increase for the duration of the run.
Comparing today to every other day, the pattern looks about the same.
Every morning between 5:35 and 6:00, Dynatrace raises the same response time degradation problem. I chatted with Dynatrace support, and they said that's because the response time varies from problem to problem: one day it might be 1.2 s, another day 5 s.
A maintenance window was suggested, but we still want to be alerted if there are any failure rate increases or major response time differences during this time.
This seems like a perfect use case for the Dynatrace AI to handle the alert automatically, or at least to detect it as a "Frequent Issue", right?
10 Sep 2021 06:29 PM
Hi,
Can you share two of the problems? Is the degradation for a single request? Maybe it's worth changing the anomaly detection for this service/request to be less sensitive.
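If the API is more convenient than clicking through the UI, a minimal sketch of that tuning is below. It assumes the Configuration API v1 endpoint /api/config/v1/anomalyDetection/services (the global service settings; a per-service override from the service's settings screen would be more targeted), and the responseTimeDegradation field names are assumptions to verify against the GET response in your environment's API explorer:

```python
import requests

# Assumptions: tenant URL, an API token with configuration read/write scope,
# and the Config API v1 service anomaly-detection endpoint and field names.
BASE_URL = "https://abc12345.live.dynatrace.com"   # hypothetical tenant URL
HEADERS = {"Authorization": "Api-Token <your-token>"}
ENDPOINT = f"{BASE_URL}/api/config/v1/anomalyDetection/services"

# Fetch the current service anomaly-detection settings.
config = requests.get(ENDPOINT, headers=HEADERS).json()

# Raise the degradation thresholds so a short, known spike no longer trips a
# response-time problem. Field names here are assumptions; check the structure
# actually returned by the GET above before writing anything back.
config["responseTimeDegradation"]["automaticDetection"].update({
    "responseTimeDegradationMilliseconds": 1000,  # absolute slowdown threshold
    "responseTimeDegradationPercent": 100,        # relative slowdown threshold
})

# Write the modified settings back.
resp = requests.put(
    ENDPOINT,
    headers={**HEADERS, "Content-Type": "application/json"},
    json=config,
)
resp.raise_for_status()
```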
10 Sep 2021 06:49 PM
I would have to side with the maintenance window for that small time frame, especially if it's not really being factored in as a frequent issue.
10 Sep 2021 06:57 PM
The problem with that approach is that it would mean disabling monitoring during business hours for 3-4 critical services. It's one thing to disable only slowdown problems for that window, but the risk of disabling both slowdown and error problems is too great.
10 Sep 2021 07:02 PM
Well, you don't have to disable the monitoring; you can just set it to not page out an alert during that time, and you can isolate it to just that one service. Or set a custom event for alerting and curb the baseline, or go with static thresholds.
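For the first option, a sketch of a recurring maintenance window scoped to the one affected service is below. It assumes the Configuration API v1 endpoint /api/config/v1/maintenanceWindows and the DETECT_PROBLEMS_DONT_ALERT suppression mode; the schedule field names and the placeholder service ID are assumptions to verify against your environment's API explorer:

```python
import requests

# Assumptions: tenant URL, token scope, and the Config API v1
# maintenance-window endpoint and payload structure.
BASE_URL = "https://abc12345.live.dynatrace.com"   # hypothetical tenant URL
HEADERS = {
    "Authorization": "Api-Token <your-token>",
    "Content-Type": "application/json",
}
ENDPOINT = f"{BASE_URL}/api/config/v1/maintenanceWindows"

SERVICE_ID = "SERVICE-0000000000000000"  # placeholder ID of the batch-affected service

# One weekly window per batch day (Tue-Fri), 05:30 for 30 minutes.
for day in ("TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY"):
    window = {
        "name": f"Morning batch run ({day.title()})",
        "description": "Suppress paging during the known 5:30 AM batch job",
        "type": "PLANNED",
        # Problems are still detected and shown, but notifications are suppressed.
        "suppression": "DETECT_PROBLEMS_DONT_ALERT",
        "scope": {"entities": [SERVICE_ID], "matches": []},
        "schedule": {
            "recurrenceType": "WEEKLY",
            "recurrence": {
                "dayOfWeek": day,
                "startTime": "05:30",
                "durationMinutes": 30,
            },
            "start": "2021-09-14 00:00",
            "end": "2022-09-14 00:00",
            "zoneId": "America/New_York",  # assumption: adjust to your timezone
        },
    }
    resp = requests.post(ENDPOINT, headers=HEADERS, json=window)
    resp.raise_for_status()
```

The trade-off is that this suppresses notifications for both slowdown and error problems on that service during the window, which is the concern raised above; if error alerts must keep paging during the batch run, the custom event with a static threshold is the better fit.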