Our client is using the performance and/or page load distribution metric for a particular service to present a performance figure in their annual report. As a result, it attracts a lot of scrutiny from their auditors, so they want to know, in detail, how the data is averaged to get from one transaction to one year of transactions. Ideally this explanation would come from official dynaTrace documentation somewhere.
I can't seem to find this in the documentation outside of the network performance diagram and URL aging.
You can find the following descriptions of the performance and baseline calculations below.
The percentage of front-end operations completed in a time shorter than the defined performance threshold compared to all successful operations.
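As a minimal sketch of the documented formula (the function and variable names here are illustrative, not actual product internals), the performance figure for a single interval is simply the share of successful operations that beat the threshold:

```python
# Illustrative sketch of the documented formula: performance % =
# operations faster than the threshold / all successful operations.
# Names and the threshold value are assumptions, not product internals.

def performance_pct(operation_times_ms, threshold_ms):
    """Percentage of successful operations completing under the threshold."""
    successful = list(operation_times_ms)  # assume every listed operation succeeded
    fast = [t for t in successful if t < threshold_ms]
    return 100.0 * len(fast) / len(successful)

# Example: 8 of 10 operations complete under an assumed 3000 ms threshold.
times = [1200, 900, 3400, 2100, 800, 5000, 2900, 1500, 2600, 1800]
print(performance_pct(times, 3000))  # 80.0
```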
A baseline is the data from the last several days (usually nine days) aggregated into one “average” or “typical” day. Baselines are necessary for considering the variations in traffic on different days of the week, random anomalies in traffic load, or to compare traffic with a known baseline from a specific point in time.
Baseline data is generated once a day after the arrival of data from the first monitoring interval after 00:10 am (in the background). Baseline data is not averaged over the day within each day and therefore may vary rapidly depending on the time of day – just as monitored data would. Each monitoring interval is assigned the value averaged over the nine-day period for this specific monitoring interval. Requesting baseline data for Yesterday will yield the same results as requesting baseline data for Today, because baseline data for yesterday will still be calculated over the last nine days counting from today.
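The per-interval baselining described above can be sketched as follows. This is an assumption-laden illustration, not product code: it assumes the history is a mapping keyed by `(day, interval_index)` and simply averages the same monitoring interval across the last nine days.

```python
# Sketch of the described baselining: each monitoring interval is assigned
# the value averaged over the same interval across the last nine days.
# Data layout and names are assumptions for illustration only.
from statistics import mean

def baseline_for_interval(history, interval_index, days=9):
    """Average the same monitoring interval across the last `days` days."""
    values = [history[(d, interval_index)] for d in range(days)]
    return mean(values)

# Interval 10 (e.g. the 00:50 slot) over nine days of synthetic data:
history = {(d, 10): 100 + d for d in range(9)}  # values 100..108
print(baseline_for_interval(history, 10))
```

Because the window always covers the last nine days counting from today, asking for "yesterday's" baseline and "today's" baseline reads the same nine-day aggregate, which matches the behaviour described above.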
"The percentage of front-end operations completed in a time shorter than the defined performance threshold compared to all successful operations."
The problem here, as I understand it, is that this is being averaged across pages, users, etc. in five-minute intervals, which are in turn averaged across hours, days, months, and years.
The auditors want a formal explanation of how this is being done. In other words, can we in good conscience say that 97% performance for a year is an accurate measure of user experience?
Here 'Performance' means 'Application Performance', and the formula ("The percentage of front-end operations completed in a time shorter than the defined performance threshold compared to all successful operations.") is applied to this metric; plotting the metric over time gives you the overall performance of the application.
Below is an example of users affected by application performance; for all other metrics you can check the given link.
Affected users (performance)
The number of unique users that experienced application performance problems or network performance problems.
Thanks for your reply.
What I'm looking for is how those figures are calculated in detail. For instance, in a particular five-minute interval we might see 150 different pages, each accessed a different number of times by approximately 500 users out of a possible 50,000-odd unique users. Are the pages averaged by user or by page? Is the averaging weighted or unweighted? Do we add the sum of all the operations for that five-minute interval together and then divide by the number of operations? How is this then calculated by hour, day, etc.?
The metric can be filtered based on other dimensions, such as the page name or client location.
If you do not include those filter dimensions, then the value is the average of all pages and all users for the time period you specify.
If you added the 'Operation' dimension and ran the report, you would get (say) 10 rows, each one a different page (Home page, login page, checkout page, etc.), and the performance metric on each row would be specific to all users for that page.
If you were to then add the 'Client site' dimension, you would have (say you have 10 pages and 10 client sites) 100 rows of data, each row's performance metric specific to the users visiting a specific page from each of your locations.
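The dimension behaviour above can be sketched in a few lines. This is an illustration under assumed data, not product code: each (page, client site) pair gets its own performance figure computed only from its own operations, so adding dimensions subdivides, rather than re-averages, the same underlying measurements.

```python
# Illustrative sketch (not product code): grouping raw operations by the
# 'Operation' and 'Client site' dimensions, then computing the performance
# metric per group. Threshold and sample data are assumptions.
from collections import defaultdict

THRESHOLD_MS = 3000  # assumed performance threshold

records = [  # (page, client_site, response_ms) - synthetic sample data
    ("Home",  "London", 1200), ("Home",  "London", 4200),
    ("Home",  "Paris",   900), ("Login", "London", 2500),
    ("Login", "Paris",  3600), ("Login", "Paris",  1100),
]

groups = defaultdict(list)
for page, site, ms in records:
    groups[(page, site)].append(ms)

perf = {}
for key, group_times in sorted(groups.items()):
    fast = sum(1 for t in group_times if t < THRESHOLD_MS)
    perf[key] = 100.0 * fast / len(group_times)
    print(key, f"{perf[key]:.1f}%")
```

Dropping a dimension collapses its groups back together, which is why an unfiltered report shows a single figure over all pages and users.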
Now, to start aggregating over time, add the Time dimension to the report, and you'll have an extra row for each 5 minutes of the day. Change the resolution to hour, and each hourly row's metrics will be the aggregate of twelve 5-minute rows: bytes are summed, packets are summed, rates (e.g. packets per second) are averaged, operations are summed, and slow operations are summed, so the performance metric is still the percentage of operations that weren't slow.
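The hourly roll-up described above can be sketched as follows (an illustration under assumed counts, not product code). The key point for the auditors' question: the operation and slow-operation counts are summed across the twelve 5-minute intervals and the hourly percentage is recomputed from those sums, which weights each interval by its traffic volume rather than averaging twelve percentages.

```python
# Sketch of the described hourly roll-up: counts are summed across twelve
# 5-minute intervals, and the performance % is recomputed from the sums.
# The sample counts are assumptions for illustration.

five_min_rows = [
    # (operations, slow_operations) per 5-minute interval
    (100, 5), (80, 2), (120, 30), (10, 0),
    (90, 9), (110, 11), (95, 5), (105, 21),
    (100, 0), (85, 17), (115, 23), (90, 9),
]

ops = sum(o for o, _ in five_min_rows)   # volumes are summed
slow = sum(s for _, s in five_min_rows)  # slow operations are summed
hourly_performance = 100.0 * (ops - slow) / ops

print(f"{hourly_performance:.1f}%")  # prints "88.0%"
```

Note that this traffic-weighted result can differ from the unweighted mean of the twelve per-interval percentages whenever busy intervals perform differently from quiet ones.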
The same occurs going to daily, weekly, monthly, etc.: the metrics are aggregated by the method that makes most sense. Volumes are summed, rates and percentages are averaged, and so on.