14 Nov 2025 09:02 AM
Hi all,
Goal
I want a Davis anomaly detector for disk usage % (e.g., alert when dt.host.disk.used.percent is high), and I’d like the problem to also show used and available capacity (in GiB) for the affected disk.
The issue
When I attach "used/avail" as scalar context via a join in the detector DQL, Dynatrace starts opening new problems whenever those values change (e.g., by ±1 GiB), even though it is the same host/disk and the same underlying breach.
Here’s the DQL I ended up with (simplified):
timeseries {
    value = avg(dt.host.disk.used.percent)
  },
  by:{ `dt.entity.host_group`, `dt.entity.host`, `dt.entity.disk`, dt.cost.costcenter }
| fieldsAdd
    host_group_name = entityName(`dt.entity.host_group`),
    host_name = entityName(`dt.entity.host`),
    disk_name = entityName(`dt.entity.disk`)
| filter disk_name != "/var/log/audit"
| join [
      timeseries {
        used_ts = avg(dt.host.disk.used),
        avail_ts = avg(dt.host.disk.avail)
      },
      by:{ `dt.entity.disk` }
    | fields
        `dt.entity.disk`,
        used_last_bytes = arrayLast(arrayRemoveNulls(used_ts)),
        avail_last_bytes = arrayLast(arrayRemoveNulls(avail_ts))
  ],
  on:{ `dt.entity.disk` }, kind: leftOuter
// convert bytes to GiB (1 GiB = 1073741824 bytes)
| fieldsAdd
    used_peak_GiB = right.used_last_bytes / 1073741824.0,
    avail_min_GiB = right.avail_last_bytes / 1073741824.0
| fieldsAdd
    used_GiB = round(used_peak_GiB, decimals:0),
    avail_GiB = round(avail_min_GiB, decimals:0)
| fieldsAdd total_GiB = used_GiB + avail_GiB
| fieldsRemove right.used_last_bytes, right.avail_last_bytes, used_peak_GiB, avail_min_GiB
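For intuition on how often those context fields tick over (a Python sketch, not DQL — the constant and the round-to-whole-GiB step mirror the query above): any drift across a 0.5 GiB boundary in the raw byte values produces a different rounded field value, even though the disk and the breach are unchanged.

```python
# 1 GiB = 2**30 bytes -- the same constant used in the DQL above.
GIB = 1073741824.0

def to_gib(raw_bytes):
    """Convert bytes to whole GiB, mirroring round(x, decimals:0) in the query."""
    return round(raw_bytes / GIB)

# Two snapshots of the same disk, ~0.6 GiB apart in raw usage:
before = to_gib(520.2 * GIB)  # 520.2 GiB rounds to 520
after = to_gib(520.8 * GIB)   # 520.8 GiB rounds to 521
print(before, after)  # -> 520 521: the joined context value changes on small drift
```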
Observed behavior
Every time used_GiB / avail_GiB changes, the detector opens a new problem instead of keeping the existing one open, which creates noise.
Is there a supported/best-practice way to include dynamic context (used/avail) in the problem without causing Davis to treat it like a “new series” and open a new problem?
I can imagine this could be done using Workflows, but I would like to know whether it is possible to achieve with just a Davis Anomaly Detector.
Thanks in advance
Patryk Ozimek