Synthetic jobs - length hostnameResolutionTimeInMs - causing problem tickets

ct_27
DynaMight Advisor

Over the past few months we've been experiencing a high number of synthetic problems. Originally this was due to the 'adapt request timeout' defaulting to 10 seconds, so we increased all of those to 60 seconds.

 

After some further analysis we noticed "hostnameResolutionTimeInMs": 60013. We then went to our synthetic ActiveGates and did some DNS lookup testing, but didn't see any such delays. We're running OneAgent on these ActiveGates and looked at the network/DNS metrics, which also indicate no more than 15 ms to resolve.
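For reference, this is roughly how we exercised DNS lookups on the ActiveGates, sketched in Python. The hostname here is a placeholder; substitute whatever host your HTTP monitor targets.

```python
# Repeatedly time hostname resolution and summarize the results,
# mirroring the manual lookup test that showed no more than ~15 ms.
import socket
import time

def resolve_time_ms(hostname: str) -> float:
    """Return wall-clock time in milliseconds for one hostname resolution."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443)
    return (time.perf_counter() - start) * 1000

def sample_resolution(hostname: str, attempts: int = 10) -> dict:
    """Resolve several times and report min/avg/max in milliseconds."""
    timings = [resolve_time_ms(hostname) for _ in range(attempts)]
    return {
        "min_ms": min(timings),
        "avg_ms": sum(timings) / len(timings),
        "max_ms": max(timings),
    }

if __name__ == "__main__":
    # "example.com" is a placeholder target, not our actual monitored host.
    print(sample_resolution("example.com"))
```

Running this on the ActiveGate hosts against the monitored endpoints is a quick way to see whether the OS resolver ever approaches the 60-second figure the monitor reports.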

 

Yet our synthetic scripts are failing and reporting lengthy resolution times. Has anyone else experienced similar issues? Our synthetic machines are in the green, so they shouldn't be overloaded. We're still analyzing, but I thought I'd ask the community.

 

 

{
  "responseStatusCode": 0,
  "totalTimeInMs": 60090,
  "responseSizeInBytes": 0,
  "responseBodySizeLimitExceeded": false,
  "hostnameResolutionTimeInMs": 60013,
  "tcpConnectTimeInMs": 2,
  "tlsHandshakeTimeInMs": 4,
  "timeToFirstByteInMs": 0,
  "redirectsCount": 0,
  "redirectionTimeInMs": 0,
  "peerCertificateExpiryDate": 1917774551000,
  "failureMessage": "",
  "waitingTime": -60019
}
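For anyone triaging similar results, here is a small sketch (not a Dynatrace API, just a local helper) that classifies a monitor result like the one above: if hostname resolution consumed nearly all of the total time and no response arrived, the timeout is almost certainly DNS-related. Field names are taken from the JSON we posted.

```python
import json

def dns_dominated(result: dict, threshold: float = 0.9) -> bool:
    """Return True if a failed monitor result spent most of its time in DNS.

    A result counts as DNS-dominated when no response came back
    (responseStatusCode == 0) and hostname resolution accounted for at
    least `threshold` of the total request time.
    """
    total = result.get("totalTimeInMs", 0)
    dns = result.get("hostnameResolutionTimeInMs", 0)
    no_response = result.get("responseStatusCode", 0) == 0
    return total > 0 and no_response and dns / total >= threshold

# The failing result from the original post (abridged to the relevant fields).
sample = json.loads("""{
    "responseStatusCode": 0,
    "totalTimeInMs": 60090,
    "hostnameResolutionTimeInMs": 60013,
    "tcpConnectTimeInMs": 2
}""")
print(dns_dominated(sample))  # → True: 60013 of 60090 ms went to DNS
```

A filter like this makes it easy to separate DNS-stall failures from genuine endpoint timeouts when you are staring at dozens of problem tickets a day.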

2 REPLIES

dannemca
DynaMight Mentor

Some of my Synthetics started to alert with timeouts on some HTTP events right after we updated the AGs to the latest version (1.233.152), but I don't remember whether it was the hostnameResolutionTime that was causing it...
All went just fine after we restarted the AG process.

Site Reliability Engineer @ Kyndryl

ct_27
DynaMight Advisor

Thank you for the suggestion. We did a restart on Thursday and gave it a few days, but unfortunately it did not fix the issue. We opened another chat, which turned into another support case on the same problem.

In the meantime, we're getting around 70 false problems per day.