I have configured a transaction in Enterprise Synthetic, and I can see the C-N-S time for that transaction.
I would like to know what mechanism it follows to obtain the C-N-S time.
In one of my use cases, most of the time is shown as spent on the Client tier, whereas in reality it is spent on the Server tier rather than the Client tier.
So I would like to know how the tool determines the CNS values.
Can anyone help me with this?
There are two parts to your post: general information about CNS, and the higher-than-expected client time you are observing.
For the first part, I'd recommend starting with the following documentation and forum pages:
Please also take into account that Dynatrace Network Analyzer (DNA) calls are made through Open Active Script API Methods and Properties calls related to the transactions, rather than directly from the VBA code. In turn, these calls are normally made through the Framework StartTrace and StopTimerAndTrace calls, rather than directly from the script.
As for the higher-than-expected client time: while it is difficult to judge without specific information, this is often a consequence of starting the transaction prematurely. Transactions should not include user actions, such as filling in information, that involve no communication with the server. Please review Representing Transactions in the Script.
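To illustrate the structural point only: the real Robot script is VBA calling the ESM Framework, but the idea can be sketched in Python. Everything here is a stand-in (start_trace, stop_timer_and_trace, and the sleeps are hypothetical, not real ESM API code); the point is that local form-filling stays outside the timed span.

```python
import time

def start_trace(name):
    # Stand-in for the Framework StartTrace call; in ESM this also
    # sends the StartCNS request to DNA before the timer starts.
    return (name, time.monotonic())

def stop_timer_and_trace(tx):
    # Stand-in for StopTimerAndTrace: stop the timer (the real call
    # also reads the CNS values back from DNA).
    name, t0 = tx
    return time.monotonic() - t0

# Local form-filling involves no server communication, so it belongs
# OUTSIDE the transaction; including it inflates Client Time.
time.sleep(0.05)                  # pretend the robot types credentials here

tx = start_trace("Login")         # start only when server traffic begins
time.sleep(0.02)                  # pretend the request/response round trip
elapsed = stop_timer_and_trace(tx)

# elapsed covers only the server interaction, not the form-filling
print(round(elapsed, 2))
```

If the StartTrace equivalent were placed before the form-filling, the 50 ms of typing would be reported as Client Time, which matches the "transaction started prematurely" symptom described above.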
Please consider opening a support ticket if you cannot address your problem with the information in the Dynatrace Community.
That last link, "ESM - Good To Know", is from the internal DC RUM forum and gives me an access-denied error. If it contains publicly shareable content, could you perhaps copy and paste it here?
I would like to share the following insights on why the CNS (Client-Network-Server) timings are usually bigger than the "Transaction Time" metric in Enterprise Synthetic Management (ESM 12.3.6 and CAS 12.3.8). We got this information from our development team (special thanks to @Mani Mukherji). Here is an example of what I mean:
The Transaction Time metric reflects where you set the respective StartTrace and StopTimerAndTrace timing tags in your Robot script:
The StartTrace call sends a "StartCNS" request to the DNA module and then starts the timer for the transaction. This ensures that the DNA capture contains the packets that originate at the beginning of the transaction. The DNA module requires some time to start up, and that time is captured in the "(C+N+S) - Transaction Time" difference that we see; this is expected.
The StopTimerAndTrace call stops the transaction timer and obtains the CNS values from DNA; there is hardly any overhead in the latter operation, and it does not contribute to the "(C+N+S) - Transaction Time" difference.
There should always be a consistent difference of around 400 ms (on average) between the CNS sum and the Transaction Time, which is unavoidable and cannot be bridged.
Thanks! I can't see the images, but that's fine - the text content is enough for me at least.
I assume that 400 ms of overhead goes into Client Time, since that's where starting the DNA module takes place...
I have reattached the illustrations; hopefully you can see them now. By the way, as you can see from the first line of the illustration, you cannot assume that the difference goes to Client Time only. It applies to the whole sum, not to any individual measurement.
Actually, I can assume that, since I have several transactions that display 98% Client Time with barely any Network or Server Time. In your example, it's true that the top row only has 158 ms Client Time, so some of the overhead surely went to Server Time in that case. But in my own examples, Network + Server Time can be around 50 ms total, so in those cases the overhead pretty much all went to Client Time. The distribution appears to be somewhat random, then.