04 Feb 2026 07:39 AM - edited 04 Feb 2026 07:40 AM
In our org, we hit the limits because we use metric selectors for our alerts.
Our success-rate query calculates a percentage. Is there any other way to achieve the same with a plain metric key? With a metric key alone we cannot calculate the percentage, and if we switch to seasonal baselines we will also hit the limits quickly, since we have a large number of teams.
builtin:service.errors.server.successCount
  :names
  :filter(in("dt.entity.service",entitySelector("type(~"SERVICE~")")))
  :splitBy("dt.entity.service.name")
/
(builtin:service.requestCount.server
  :names
  :filter(in("dt.entity.service",entitySelector("type(~"SERVICE~")")))
  :filter(series(sum,gt(50)))
  :splitBy("dt.entity.service.name"))
* 100
04 Feb 2026 09:13 AM
Hi,
If your org has already hit the alert limits due to using metric selectors (especially for calculated success % and with a large number of teams/services), then the most pragmatic next step is to contact Dynatrace Support and open a ticket.
It's highly likely Dynatrace can review your specific use case and adjust tenant- or cluster-side settings, for example by increasing certain limits or recommending an alternative configuration, so you don't immediately hit the limits again as you scale (seasonal baselines combined with high splitBy cardinality consume limits very quickly).
04 Feb 2026 04:52 PM
Hi,
Another alternative, if you are on SaaS, is to use a Davis Anomaly Detector instead; that limit applies only to classic metric events.
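For reference, a Davis Anomaly Detector is configured with a DQL query rather than a metric selector, so the success-rate division can be expressed there directly. A rough sketch might look like the following; the Grail metric keys (`dt.service.request.count`, `dt.service.request.failure_count`) and the exact arithmetic are assumptions you would need to verify against the metric browser in your own tenant:

```
// Hypothetical DQL sketch: success rate (%) per service.
// Metric keys below are assumptions; check what your tenant actually exposes.
timeseries {
    total  = sum(dt.service.request.count),
    failed = sum(dt.service.request.failure_count)
  },
  by: { dt.entity.service }
| fieldsAdd successRate = (total[] - failed[]) / total[] * 100
```

The `[]` suffix applies the arithmetic element-wise across the timeseries arrays, which is how a percentage of two series is typically derived in DQL.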
Best regards