Hello – I’m trying to better understand the new baselining feature that was described in this blog post: https://www.dynatrace.com/news/blog/dynatrace-innovates-again-with-the-release-of-topology-driven-au...
I was very excited when I saw this, however I’m having difficulty understanding the details now that I’m looking at it in the tool. Some questions:
Thanks for any info.
I think some of your questions are answered in my help page here:
- Auto-adaptive baselines are updated once a day.
- No, it appears on the same level, as it shows the current state of the baseline, calculated from the last 7 days of historic data. So it is a momentary snapshot of the calculated baseline.
- In the current state we do not aggregate over entities. The baseline is calculated once a day for every entity within the entity filter. So if you select the CPU usage metric for 5 hosts, you get 5 baselines updated every day, and possibly 5 alerts if a baseline is breached.
Our plan is to introduce baselines on aggregates by the end of this year as well, so that you have both options.
- Yes, the alert always uses the last updated baseline value for each entity. So again, for 5 hosts you get 5 baselines, one for each host at the metric level. You can easily check this by creating a baseline for a single host or service, which should match exactly the value shown in the config screen.
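To make the per-entity behavior concrete, here is a minimal sketch of the idea described above: one baseline per entity, recomputed once a day from that entity's last 7 days of data, with no aggregation across entities. The mean-plus-3-standard-deviations model is purely a hypothetical stand-in; the source does not describe Dynatrace's actual baselining math.

```python
# Illustrative sketch only -- NOT Dynatrace's actual algorithm.
# Assumption: one baseline per entity, updated daily from 7 days of history.
from statistics import mean, stdev

def update_baselines(history):
    """history: {entity_id: [measurements from the last 7 days]}.
    Returns one baseline per entity (no aggregation over entities)."""
    baselines = {}
    for entity, values in history.items():
        # Hypothetical model: mean plus a 3-standard-deviation tolerance band.
        baselines[entity] = mean(values) + 3 * stdev(values)
    return baselines

# Five hosts would yield five independent baselines, and potentially
# five separate alerts -- two hosts shown here for brevity.
history = {
    "host-1": [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 1.1],
    "host-2": [5.0, 5.2, 4.8, 5.1, 5.0, 4.9, 5.3],
}
baselines = update_baselines(history)

def breaches(entity, value):
    """An alert fires only against that entity's own baseline."""
    return value > baselines[entity]
```

Note that `host-2`'s higher normal level never affects `host-1`'s baseline, which is the point of per-entity (rather than aggregated) baselining.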
Thank you for your response!
Can you please confirm my new understanding:
- "Auto-adaptive baselines are updated once a day." So this means the baseline is at a single value (e.g. 1ms) for 24 hours and then could shift to a new value (e.g. 9ms) for the following 24 hours based on subsequent data? Also when during the day does this change happen? (e.g. Midnight)
- "In current state we do not aggregate over entities." So the current selection for the aggregation picklist has no impact in the current release?
- Yes exactly, this current model adapts once a day, based on 7 days of historic data.
- The aggregation selection refers to the 'subminute' aggregate that is used per single line. For example, the host CPU metric is collected 6 times a minute for each host; with this selection you control which line you would like to baseline: the avg of those 6 measurements per minute, the max, the min, or the count (which makes little sense here).
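A small sketch of what that 'subminute' aggregation choice means in practice, using six hypothetical CPU samples from one minute (the sample values are made up for illustration):

```python
# Sketch of the 'subminute' aggregation described above: six samples
# per minute are collapsed into one per-minute line, and the picklist
# chooses which of these lines the baseline is computed on.
from statistics import mean

samples = [42.0, 45.5, 41.2, 47.8, 43.1, 44.0]  # 6 CPU samples in one minute

lines = {
    "avg": mean(samples),    # the averaged per-minute line
    "max": max(samples),     # the per-minute maximum line
    "min": min(samples),     # the per-minute minimum line
    "count": len(samples),   # number of samples -- rarely useful to baseline
}
```

The baseline then tracks whichever one of these per-minute lines you selected, not the raw subminute samples themselves.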
One last question, promise!
Does the same apply to those baselines that were before, like errors, load and response time? Are those also adjusted once every 24hrs?
Existing settings are unchanged. By the way, CPU as well as memory are saturation events; I would not set those to baselining, as it would not deliver the same semantics. For CPU it might be a bit harsh to alert if a learned baseline of 30% is breached with, e.g., 35%. Here I would really leave threshold-based alerting in place.