
This product reached the end of support date on March 31, 2021.

Discrepancy between DCRUM's server time and Managed's PurePath, and also between PurePath and user action duration

DynaMight Pro

In my past experience with DCRUM and AppMon's PurePath, there would always be a discrepancy, but a small one. Now, however, I see a huge discrepancy between DCRUM and Managed's PurePath (and it happens quite often).

As you can see in the diagram above, I purposely chose a timeframe in which there is only one transaction, to make sure we eliminate other 'noise' from the discussion.

Now, let's look at this diagram below:

In this diagram, I captured the PurePath details as well, in case any of you need that info. The point that matters to me is, again, to show that within this timeframe there is only one PurePath, and thus only one user action corresponding to it, which is expected. Great.

BUT, as the last diagram (the diagram below) shows, the user action duration is shorter than the PurePath's duration. I wonder how that is possible; shouldn't the user action always be longer than the PurePath duration? To make sure I was looking at the correct data, I tried to "drill back" to the PurePath from this user action, and it did indeed bring me back to the same PurePath as before.

Also, the time period shown (marked 'btw' by me in the last diagram) seems to be a bug?

Thanks for reading this far. Although this is only one post in the forum, there are actually three questions:

1. Huge discrepancy between DCRUM's server time and the PurePath duration.

2. User action duration longer than the PurePath duration.

3. A possible bug in the time period shown on the PurePath dashboard.

Since this involves two products, I posted it in both the DCRUM and the Managed forum.


Dynatrace Pro


At first sight this situation looks strange, but when thinking about possible scenarios, I can imagine several in which such measurements could indeed happen. Please keep in mind that every one of those measurements, whether user action, network traffic, or the server's PurePath, is just a model of application behavior, and like every model, it represents some scenarios well and others poorly.

From the DC RUM perspective, a viable scenario that would result in falsely reported long server times is an app designed so that the client (browser JavaScript) requests some content and then waits on the connection until the server side has new data available, not because the server is busy processing the request. In this case the plain HTTP request-response model is used to simulate an asynchronous callback: the server responds to a request only when it has new data for the client, or it may respond that there is no new data (after waiting for e.g. 30 sec), which causes the client to open a new request, and the process continues. The result reported by DC RUM would be a notoriously long server time, which correctly reflects what happened on the wire from the HTTP perspective, but not what the app developers implemented (circumventing HTTP in a clever way).
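The long-polling pattern above can be sketched as a minimal simulation (the timeout value, the timings, and the `server_long_poll` helper are all hypothetical, purely to illustrate how a wire-level observer ends up attributing deliberate wait time to "server time"):

```python
import time

POLL_TIMEOUT = 2.0  # hypothetical: server holds the request open up to this long


def server_long_poll(data_ready_after, timeout=POLL_TIMEOUT):
    """Simulate a long-poll endpoint: the connection stays open until data
    becomes available or the timeout elapses. Actual processing work is
    negligible; almost all of the elapsed time is deliberate waiting."""
    wait = min(data_ready_after, timeout)
    time.sleep(wait)  # the connection simply sits open on the wire
    return "data" if data_ready_after <= timeout else "no-new-data"


# What a wire-level monitor would record as "server time" for this hit:
start = time.monotonic()
payload = server_long_poll(data_ready_after=0.3)
observed_server_time = time.monotonic() - start

# The observed time reflects the wait, not real processing cost.
print(payload, round(observed_server_time, 1))
```

From the wire's point of view the measurement is correct; it just does not mean "the server was computing for that long."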

There are configuration means in DC RUM to prevent such measurements, e.g. excluding some URLs from monitoring, or closing page loads upon detection of specific content elements that explicitly indicate the end of the page.

I can imagine that in UEM, when measuring action duration, the situation could be similar: the action is finished, but some background JavaScript on the browser side is still waiting for the server response to come back, and when it does, an async update of the page content is triggered. If what comes back is empty, no update is triggered, so from the user's perspective the action ended long ago (which would mean UEM was correct).
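That timeline can be sketched the same way (the ~200 ms action and ~700 ms background response below are invented numbers, just to show how the user-visible action duration can legitimately come out shorter than the server-side PurePath):

```python
import time

t0 = time.monotonic()

# User-visible part of the action: DOM updated, page is usable again.
time.sleep(0.2)
user_action_duration = time.monotonic() - t0  # roughly what UEM would report

# A background request fired by the page keeps the server busy longer;
# the eventual response is empty, so no further page update happens.
time.sleep(0.5)
server_response_time = time.monotonic() - t0  # roughly what the PurePath covers

print(user_action_duration < server_response_time)
```

Both numbers are "right"; they just measure different parts of the same interaction.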

So perhaps the mystery could be solved through a deep dive with the app developers? Or at least by starting with two deep dives: one into what is going on inside the PurePath across the app tiers, and another into the ADS level of the DC RUM measurements (to see server times for individual hits and identify which one is responsible for the lengthy measurement), or, ideally, a packet trace captured for such a transaction on the AMD (to confirm where the packet that finishes the server time measurement is, and what it carries).

Best regards