I read somewhere that SSL was at one point part of network time, and I saw it is definitely excluded from it in the 2017 release, but I'm not sure what the position is with 12.4. I have an instance where we simultaneously moved to a new internet link and introduced SSL. I can see the SSL connection time, but the client round-trip time and overall network time also increased significantly, so I'm trying to work out whether this is caused by the new link or by the introduction of SSL. At the same time, the server loss rate went up to about 20%, so we're thinking perhaps the new ISP is not quite up to scratch.
SSL connection setup time is reported separately when the AMD architecture is HS, so always in the 2017 release, and in 12.4 when an HS AMD is used. The same is true of RTT: it is not part of the page load time either; use the separate metric to look at this KPI. https://university.dynatrace.com/education/dcrum/... has more on this topic.
In your case the 20% loss rate looks suspicious; it's simply too high to be realistic. It might be that the SPAN or tap on the new links does not deliver clean traffic, so the incoming traffic diagnostics are where you should probably look first, and the way to get confidence in the network time measurements.
Sure, the metrics are there and always have been, same as the RTT measurements. The change with the HS AMD is that these are no longer accounted as part of the network time for a page/operation. If RTT is, say, 50 ms and SSL connection setup takes 300 ms, then the operation time reported by 2017 will be at least 3x50 + 300 = 450 ms lower than on 12.4 classic. In reality it will be more, because of the idle times that always occur.
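To make that arithmetic concrete, here's a minimal sketch of the lower bound on the reporting difference, using the post's own assumption that 12.4 classic folds the connection setup (3 x RTT in the example) plus the SSL handshake into network time, while the HS AMD reports them separately. Numbers and the helper function are illustrative only:

```python
# Sketch: lower bound on how much smaller a 2017/HS-AMD operation time will
# read compared to 12.4 classic for the same operation.
# Assumption (per the example above): classic counts TCP connection setup
# (~3 x RTT here) and the SSL handshake inside network time; HS AMD does not.

def reported_time_difference_ms(rtt_ms: float, ssl_setup_ms: float) -> float:
    """Lower bound on the gap; real gap is larger due to idle times."""
    tcp_setup_ms = 3 * rtt_ms
    return tcp_setup_ms + ssl_setup_ms

print(reported_time_difference_ms(50, 300))  # 450.0, matching the example
```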
Hope this helps
Thanks for the responses, guys. @Kris Z. I watched that video earlier this morning; I was just unclear whether it applied only to 2017 or also to 12.4, and, as you mention, only with the HS AMD. We do have an HS AMD, so we are getting SSL reported separately on the main performance tab rather than on the network tab. But SSL was introduced at the same time the ISP was changed, and even if we discount the loss rate, client RTT went from <10 ms to ±300 ms, realized bandwidth dropped from about 9 Mbps to 1.5 Mbps, and network time went from 67 ms to 800 ms. So I need to be absolutely clear that this could in no way be caused by the introduction of SSL, but is rather down to a very poor quality ISP.
The introduction of SSL wouldn't affect RTT, so it can be safely excluded that introducing SSL degraded general network quality. The only thing that comes to mind is to ask where exactly the SSL encryption is occurring. I can imagine a setup where there's a proxy in the cloud through which all traffic goes (like Zscaler) and encryption occurs there; if the nearest cloud node is far away, then the RTT would indeed be long and the overall user experience would degrade. But the first thing to verify is the 20% loss rate. It just doesn't look right.
The SSL encryption is being done on the local web server. Any suggestions on how to verify the loss rate would be appreciated. If I look at the traffic in Wireshark, there are actual retransmissions, so my assumption is that it is set up correctly. That said, we're monitoring inside the firewall, so there could be stuff going on downstream that we're not seeing. Initially, when the line was introduced, there was a massive amount of "junk" traffic on it, and whilst that was sorted out, we have had other issues with the ISP which are making me suspicious of them...
Start with the CAS > Diagnostics > Traffic Diagnostics reports. Look for traffic levels that may indicate interface saturation (packets may be dropped by interfaces on the packet broker side), the sequence number gap rate (which will indicate whether drops occur), and IPv4 duplicates (a high ratio may indicate the deduplication buffers are overloaded, which causes false retransmission rate measurements). In case of doubt you can always open a Support call and ask for help verifying the traffic quality in the context of the measurements.
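To illustrate why IPv4 duplicates matter: when two SPAN sessions both copy the same frame, the analyzer sees the same TCP sequence number twice and, unless the duplicate copy is filtered out first, may count the second copy as a retransmission. This is a toy sketch of that effect, not the AMD's actual deduplication algorithm; frames are modelled as hypothetical (IP ID, TCP seq) pairs:

```python
# Toy illustration (not the AMD's real logic): duplicated SPAN frames look
# like retransmissions unless exact copies are deduplicated first.

def count_retransmissions(frames, deduplicate):
    """Count frames whose TCP sequence number was already seen.

    Each frame is an (ip_id, tcp_seq) tuple. With deduplicate=True, exact
    duplicate copies (same IP ID *and* same seq, i.e. the same frame seen on
    two SPAN ports) are dropped first, as a dedup buffer would. A genuine
    retransmission (same seq, different IP ID) still survives dedup.
    """
    if deduplicate:
        seen, unique = set(), []
        for frame in frames:
            if frame not in seen:
                seen.add(frame)
                unique.append(frame)
        frames = unique
    seqs_seen, retrans = set(), 0
    for _ip_id, seq in frames:
        if seq in seqs_seen:
            retrans += 1
        seqs_seen.add(seq)
    return retrans

# Two overlapping SPAN ports each copy the same three segments:
capture = [(1, 1000), (2, 2000), (3, 3000)] * 2
print(count_retransmissions(capture, deduplicate=False))  # 3 false retransmits
print(count_retransmissions(capture, deduplicate=True))   # 0
```

This is the same failure mode as spanning both the firewall port and the chassis port at once: every packet crossing both is captured twice.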
Thanks for this. There was indeed a high number of duplicates; I was not aware that this could affect the retransmission rate figures. Essentially I was spanning the port where the firewall connected as well as the port where the blade chassis connected, in order to see all traffic reaching the servers in that chassis. I tried selectively turning them off, one and then the other, and got similar figures from both, i.e. around a 20% drop in user count with a more or less steady operation count. So I have left the connection to the chassis off, but this leaves the possibility that we may miss traffic not coming through that particular firewall connection. This has brought the loss rate down to around 4%, which is much better.

In terms of verifying the number of users, if I check it against AppMon for the same time interval, DC-RUM is about 20% shy of the AppMon figure. The RTT figures are still pretty much the same, and therefore still slower than on the old ISP, so we can now definitively say that this is not related to the implementation of SSL. The client is busy implementing a new network design with links to multiple ISPs, so we should be able to get a clear picture of how link performance compares between ISPs. Thanks for your help!