Network and Server time from S/s in DCRUM 12.2.0

taral_parekh
Participant

Hi,

DCRUM version: 12.2.1

One thing I have never really been clear on is how network and server time are calculated when looking at them from within software services. Maybe you can provide me with a detailed explanation of this. I understand the concept of network and server time, but I do not really understand how these values are derived within software services.

6 REPLIES

adam_piotrowicz
Dynatrace Pro

Time metrics at the Software Service or server level are averages over all operations (learned, statically monitored, and all others) seen by the AMD (but not necessarily reported).
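As a minimal sketch of what "averages over all operations" means in practice (the record layout and field names below are assumptions for illustration, not the AMD's actual data model):

# Hypothetical per-operation records rolled up to a software-service average.
# Layout and field names are illustrative assumptions only.
operations = [
    # (software_service, server_time_ms, network_time_ms)
    ("MyApp-HTTP", 120.0, 35.0),
    ("MyApp-HTTP",  90.0, 50.0),
    ("MyApp-HTTP", 200.0, 20.0),
]

def service_averages(ops, service):
    rows = [(srv, net) for svc, srv, net in ops if svc == service]
    avg_server  = sum(srv for srv, _ in rows) / len(rows)
    avg_network = sum(net for _, net in rows) / len(rows)
    return avg_server, avg_network

print(service_averages(operations, "MyApp-HTTP"))
# -> roughly (136.7, 35.0): plain averages over every operation the AMD saw,
#    not only the ones that end up reported.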

I have the same question and am wondering if you can elaborate. In my case, we have HTTP and SSL decodes in use, with one SPAN port routing data from the output side of the load balancer to the AMD. This is our data feed point. So when I see "network time" in my metrics, how, in the absence of additional data feed points or some type of NetFlow source, can I tell how much of a transaction is actual time on the network?

For example, the flow starts when an HTTP request is made to the server; it is collected as it passes the AMD, which is more or less in the middle. The AMD sees the response in the other direction. But from this alone, all you would be able to ascertain is the time to and from the server. You wouldn't know how much time the server spent on it versus how much time was spent traversing the network. And you wouldn't know anything about the communication time to the client: how long it took to traverse the network to the AMD, or back. I assume, then, that it must be using date/time from the protocol content to ascertain more specific timing. (I wish I could find an explanation of this similar to the diagram that is readily available for Client and Server RTT, but I cannot.)
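To make the question concrete, here is a rough sketch, with made-up numbers, of the only raw data I can see a single mid-path capture point having to work with (packet pass-by times at the probe), plus one way I could imagine splitting them; whether this resembles what the AMD actually does is exactly what I'm asking:

# Hypothetical pass-by timestamps (seconds) at a single capture point in the
# middle. This is only my sketch of what could be inferred, not a claim about
# how the AMD actually works.
t_last_request_pkt   = 10.000   # last packet of the HTTP request passes the probe
t_first_response_pkt = 10.180   # first packet of the HTTP response passes the probe
server_side_rtt      = 0.004    # probe <-> server, e.g. from SYN-ACK/ACK spacing
client_side_rtt      = 0.040    # probe <-> client, e.g. from SYN-ACK -> ACK spacing

# The gap the probe sees spans probe->server transit, server processing,
# and server->probe transit:
observed_gap = t_first_response_pkt - t_last_request_pkt

# One plausible split: subtract a server-side round trip to estimate pure
# server "think" time, and treat the round trips as the network share.
est_server_time  = observed_gap - server_side_rtt
est_network_time = client_side_rtt + server_side_rtt

print(f"server ~ {est_server_time * 1000:.1f} ms, network ~ {est_network_time * 1000:.1f} ms")
# server ~ 176.0 ms, network ~ 44.0 ms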

So the short question is, when I see "Network Time" in my metrics for the configuration above, what am I really seeing?  Is "Network Time" really just everything on the client side of my AMD and "Server Time" everything on the server side, or is this really the time spent "On the Network", in other words transfer latency, and how does it know?

MichaelFerguson
Dynatrace Helper

Pete,

Not sure if this is what you're looking for, but there is a more detailed diagram for HTTP at the bottom of the page linked below. From it you can see that network time has several parts, including request time, download time, etc.

Graphical Explanation of Network Performance Metrics
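As a purely illustrative sum (made-up numbers; the exact set of components depends on the decode and is spelled out in the diagram), the point is that network time is a composite of wire-level parts rather than a single latency figure:

# Made-up numbers, only to illustrate that "network time" is a sum of
# wire-level components (request time, download time, etc.), none of which
# is server processing time.
request_time_ms  = 12.0   # request travelling from client to server
download_time_ms = 85.0   # response payload streaming back to the client
other_parts_ms   = 30.0   # the "etc." components shown in the diagram

network_time_ms = request_time_ms + download_time_ms + other_parts_ms
print(network_time_ms)  # 127.0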

Hope this helps.

Mike

That's what I was looking for, but the resolution is so poor I can't read it. I can't open the attachment because the file type is not recognized by any program on my computer. Any way to get a readable copy?

Does this help? Download here: operation-time.png

pebalm
Guide

Yes, I can read that!  If I could draw your attention to the "RTT" box on the left side.  Shouldn't that either be larger, to be a "round trip", or be labeled 1/2 a round trip?  That doesn't look like a round trip to me.

Now, with a legible diagram, I'm trying to reconcile where DCRUM gets its timings to ascertain the various latency measures. Does it do so with "local recording", that is, by recording the time at which the traffic arrives at or passes the AMD? Through analysis of timestamps found in the protocol content? Or both?

For example, the diagram explanation of Client RTT and Server RTT shows the AMD in the middle, with timing on either side. This would imply to me that RUM is recording the time as traffic passes the data feed point. The diagram above, however, makes no mention of the AMD, leading me to believe all latency is ascertained through timestamps in the protocol; that is, timing is derived from content delivered to the AMD rather than from recording the traffic send/receive times at the AMD.
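If it is "local recording", I'd picture something like the standard passive split of RTT from the TCP three-way handshake, timed purely with the probe's own clock (again, just my sketch of the general technique, not a statement about the AMD's implementation), which would mean no protocol-embedded timestamps are needed:

# Passive RTT split from a TCP three-way handshake, using only the probe's own
# packet-arrival clock. Illustrative timestamps in seconds.
t_syn     = 0.0000   # client SYN passes the probe
t_syn_ack = 0.0042   # server SYN-ACK passes the probe on its way back
t_ack     = 0.0431   # client ACK passes the probe

server_rtt = t_syn_ack - t_syn   # probe -> server -> probe
client_rtt = t_ack - t_syn_ack   # probe -> client -> probe

print(f"server RTT ~ {server_rtt * 1000:.1f} ms, client RTT ~ {client_rtt * 1000:.1f} ms")
# server RTT ~ 4.2 ms, client RTT ~ 38.9 ms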

Take "SSL Connection Setup Time", as seen in the diagram, for example.  I would extract from a protocol date/time stamp the time at which the request is made by the client, put on the network to the server, requesting the server's cert.  There are a few possible exchanges in there, but the handshake process should end when the server sends a message to the client indicating the server portion of the handshake is complete and the session may now begin.  I would not be able to ascertain the time this final communication is actually received by the client unless there is something at a lower (tcp?) level that is acknowledging the receipt of these packets and providing a time stamp of when it did so.

Assuming that is possible, I can only conclude that the "SSL connection setup time" is calculated from the time the client puts its initial SSL request "on the wire" until the time the client receives confirmation from the server that the process is complete, all based on date/time content of the protocol (at some level).
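In other words, something along these lines; the timestamps are made up, and the client-RTT adjustment is only my guess at how a probe-side measurement would be translated to the client's perspective:

# Made-up timestamps (seconds) at the probe; the client-RTT adjustment below is
# my own assumption, not a documented DCRUM formula.
t_client_hello  = 0.000   # ClientHello passes the probe
t_server_finish = 0.065   # server's final handshake record passes the probe
client_rtt      = 0.040   # probe <-> client round trip, e.g. from TCP ACK timing

# The hello already spent roughly half a client RTT reaching the probe, and the
# server's completion message still needs the other half to reach the client,
# so add about one full client-side RTT to the span the probe observed.
ssl_setup_time = (t_server_finish - t_client_hello) + client_rtt

print(f"SSL connection setup ~ {ssl_setup_time * 1000:.1f} ms")
# SSL connection setup ~ 105.0 ms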

Furthermore, it would seem important to realize that not all of what's being called "Network Time" is actually time on the network. There's a lot of processing time in there (checking certificates, etc.) on the part of both the client and the server.

The reason I'm trying to dig to this level of detail is that we have a scenario playing out right now where an application's response time has increased compared to historical data. RUM is telling us "Network Time" is the major component of the transaction, yet all our RTTs are very small, leading us to believe the actual network transmission time is not the culprit.