01 Oct 2025 09:53 PM
I am using the built-in metric:
builtin:host.cpu.usage:splitBy("dt.entity.host"):sort(value(auto,descending)):limit(20), which shows me when the CPU goes to 100%, but I want to know the duration of the event.
02 Oct 2025 01:04 AM
@Taz I'm not sure you can query the duration of an event from Data Explorer, but it should be easy with Notebooks if you are on SaaS:
fetch dt.davis.problems
| filter in(host.name, array("XYZ"))
| filter event.name == "CPU saturation"
| fields host.name, event.name, resolved_problem_duration, display_id
10 Oct 2025 11:12 AM
If you start directly from the metric builtin:host.cpu.usage, you can’t really get the duration of an episode (bursts > X%) in Data Explorer. Data Explorer evaluates values at individual time points, not continuous segments where a condition is true.
As @p_devulapalli wrote, you have two practical approaches:
Active (how long it has been open so far):
fetch dt.davis.problems
| filter event.kind == "DAVIS_PROBLEM"
| filter event.status == "ACTIVE"
| filter event.name == "CPU saturation"
| fields host.name, display_id, event.status, open_problem_duration
Closed (how long they lasted):
fetch dt.davis.problems
| filter event.kind == "DAVIS_PROBLEM"
| filter event.status == "CLOSED"
| filter event.name == "CPU saturation"
| fields host.name, display_id, event.status, resolved_problem_duration
Create a Metric event (for example: “CPU > 95% for ≥ 5 minutes”), and then query it later in Notebooks via dt.davis.events. Events have start and end times, so you can easily calculate each episode’s duration or total time within a window.
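For the Metric-event route, the later Notebooks query could be sketched like this, assuming the custom alert shows up in dt.davis.events with event.start and event.end timestamps (the event.name value here is a placeholder for whatever name your metric event produces; verify the exact field names in your environment):
fetch dt.davis.events
| filter event.name == "CPU > 95% for 5 minutes"
| fieldsAdd episode_duration = event.end - event.start
| summarize total_duration = sum(episode_duration), by: { dt.entity.host }
This gives you both the per-episode lengths (via episode_duration) and the total saturated time per host within the query window.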
This is because a selector like:
builtin:host.cpu.usage:splitBy("dt.entity.host")
will only show values; it won’t “stitch” consecutive points above a threshold into a single episode, nor return its length. For this type of analysis you need Problems/Events in DQL (or Notebooks), where you already have duration fields or can calculate now() - start_time for active problems.
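The now() - start_time calculation for active problems could look like this (a sketch only; field names follow the dt.davis.problems examples above, so double-check them against your tenant):
fetch dt.davis.problems
| filter event.status == "ACTIVE"
| filter event.name == "CPU saturation"
| fieldsAdd open_for = now() - event.start
| fields host.name, display_id, open_for
Unlike open_problem_duration, which is a stored field, open_for is computed at query time, so it always reflects how long the problem has been open as of the moment you run the query.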