We are trying to set up an alert for when heap memory used reaches 90% (or 13 GB) within the last 30 minutes; attached is the whole process I followed to set up the incident.
The issue is that we receive an email alert immediately after setting up the incident, and the incident never ends.
I don't know what is missing.
In the incident definition, under Conditions, you should change the aggregation to Average. If it is set to Max, it essentially ignores the evaluation time frame and throws an alert as soon as the threshold is crossed at any point. Something else you can do is use Minimum: it checks the last 30 minutes to see whether the lowest value found was 90% or above. If the lowest value the measure found was at or above 90%, it throws the alert; if there is even one measurement below your threshold (90%), it will not.
My suggestion for this scenario is to use the avg aggregation so that AppMon does the math for you.
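To make the difference between the three aggregations concrete, here is a minimal Python sketch (not AppMon code; the heap-usage sample values are made up) showing how the same 30-minute window of measurements evaluates against a 90% threshold under each aggregation:

```python
# Hypothetical heap-used samples (percent) from one 30-minute window,
# containing two brief spikes above the 90% threshold.
samples = [72, 85, 91, 88, 79, 93, 86]
threshold = 90

# Max: fires if ANY sample in the window crossed the threshold.
fires_max = max(samples) >= threshold                  # True - one spike is enough

# Min: fires only if EVERY sample stayed at or above the threshold.
fires_min = min(samples) >= threshold                  # False - one low reading clears it

# Avg: fires only if the window's average crossed the threshold.
fires_avg = sum(samples) / len(samples) >= threshold   # False - spikes are smoothed out

print(fires_max, fires_min, fires_avg)  # True False False
```

This is why Max alerts immediately on a momentary spike, while Avg only alerts when heap usage is genuinely elevated across the whole evaluation window.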
Thanks, David, for the reply!
My scenario is: whenever heap memory used reaches 13 GB or more, send out an alert. So I set the upper severe value to 13 GB.
I changed the aggregation to Average, but I am still receiving email alerts right after setting up the incident. The only difference after changing to Avg is that we see fewer alerts and the incident does end.
You can chart the measure you're using with the same aggregation, and by setting the chart resolution to match your incident time frame you'll be able to see exactly what the incident evaluation sees when it triggers, and make changes accordingly to get the behavior you want.
This is expected. If the max over the last 10 seconds exceeds your threshold, then the max over the last hour also exceeds your threshold. A better alert would likely use the Min aggregation over the last minute, or possibly even the last 5 minutes, since a 90% threshold should trigger a garbage collection. Your alert will fire regardless of whether the JVM garbage-collects, and if heap reaches the 90% threshold multiple times an hour, the alert will be sustained until a full hour passes without usage going that high.
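A quick sketch (again plain Python, not AppMon; the per-minute sample values are hypothetical) of why Max over a long window keeps the alert raised long after a single spike, while Min over a short recent window only fires on sustained high usage:

```python
# Hypothetical per-minute heap-used samples (percent) for one hour,
# with a single spike above 90% at minute 10.
samples = [80] * 10 + [95] + [80] * 49   # 60 samples = 60 minutes

# Max over the whole hour: the single past spike still raises the alert,
# and will keep doing so until the spike ages out of the window.
hourly_max_alert = max(samples) >= 90    # True

# Min over the last 5 minutes: fires only if heap STAYED high that long.
sustained_alert = min(samples[-5:]) >= 90  # False - the spike has passed

print(hourly_max_alert, sustained_alert)  # True False
```

So the Min-over-a-short-window approach alerts only when heap stays pinned above the threshold, i.e. when garbage collection is no longer bringing it back down.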