I'm investigating a problem where Dynatrace's millisecond precision comes in handy. I have doubts, though, about what happens when servers are out of time sync. From what I recall, Dynatrace handles this automatically, but when analyzing at this level, even millisecond differences count.
Is it possible to know how Dynatrace deals with and corrects for time differences between servers?
Additionally, I have put in an RFE that would probably be quite cool:
Currently, when you join a new cluster node, it checks whether its time is in sync with the seed node; if it's not, the join fails. Moreover, one node periodically checks the time differences with the other nodes and, if they are out of sync, creates a cluster event.
We recommend you run an NTP service to make sure the cluster nodes' times are in sync.
I think I did not express the question correctly. My concern has nothing to do with SaaS/Managed servers. I'm only concerned with the time on servers running OneAgent. For the following, I'm going to assume that the Dynatrace servers are in perfect time sync.
ServerA might be half a second behind Dynatrace (its clock would say t-0.500s) and ServerB half a second ahead of Dynatrace (its clock would say t+0.500s). When a query is sent from ServerA to ServerB, what gets registered in Dynatrace? Would it be, say, t?
My problem is correlating this information with other sources, namely what appears in logs. It would be important to know exactly which clock the Dynatrace timestamps refer to...
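To make the scenario concrete, here is a small Python sketch of the situation described above. The offsets and the reference time are the hypothetical values from my example, not anything measured; it only illustrates why skew matters at this resolution:

```python
# Hypothetical illustration: three clocks observing the same call.
# TRUE_TIME is the reference ("Dynatrace") wall-clock time of the call;
# ServerA runs 0.5 s behind and ServerB 0.5 s ahead, as assumed above.

TRUE_TIME = 1_700_000_000.000    # reference time of the call, epoch seconds

OFFSET_A = -0.500                # ServerA's clock skew, in seconds
OFFSET_B = +0.500                # ServerB's clock skew, in seconds

seen_by_a = TRUE_TIME + OFFSET_A  # timestamp ServerA would record locally
seen_by_b = TRUE_TIME + OFFSET_B  # timestamp ServerB would record locally

# The apparent latency from A's send to B's receive is inflated by the
# relative skew, even if the real network latency were zero:
apparent_latency = seen_by_b - seen_by_a
print(apparent_latency)          # 1.0 -- a full second of pure clock skew
```

So when correlating a trace with log lines, a full second of disagreement can appear between two hosts even though nothing was slow.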
I think time differences do not impact metric reporting. When a call to a service happens, OneAgent reports it to Dynatrace and it's stored with the time it arrived. The worse problem is when cluster nodes and storage are not in sync - that's why there's a requirement on Managed nodes and not on monitored hosts.
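A toy sketch of that idea, server-side stamping on arrival (this is only a simplified model of the behaviour described above, not Dynatrace's actual implementation):

```python
import time

class Ingest:
    """Toy ingest endpoint: stamps each event with the *receiver's*
    clock on arrival instead of trusting the sender's timestamp."""

    def __init__(self, clock=time.time):
        self.clock = clock   # injectable clock, so the example is deterministic
        self.events = []

    def receive(self, payload, sender_timestamp):
        self.events.append({
            "payload": payload,
            "sender_ts": sender_timestamp,  # kept for reference only
            "stored_ts": self.clock(),      # authoritative timestamp
        })

# A sender whose clock is 0.5 s behind still gets a consistent stored
# timestamp, because the stamping happens on the receiving side:
fake_now = 1_700_000_000.0
ingest = Ingest(clock=lambda: fake_now)
ingest.receive("GET /orders", sender_timestamp=fake_now - 0.5)
print(ingest.events[0]["stored_ts"])  # 1700000000.0
```

With this model, the skew of the monitored host washes out of the stored data, while any skew between receiving nodes goes straight into it, which matches why the sync requirement sits on the cluster side.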
Logs are a different story... it all depends on how your application logs the data. Whatever is written to the file will be reported that way to Dynatrace.
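In other words, a log timestamp is just text produced by the application's own clock; once written, any skew travels with it. A minimal sketch, again with a hypothetical 0.5 s offset:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical: the application formats log lines with its own local
# clock. If that clock is 0.5 s behind, the skew is baked into the
# text -- ingesting the file verbatim does not correct it.
true_time = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
app_clock = true_time - timedelta(milliseconds=500)  # skewed host clock

log_line = f"{app_clock.isoformat()} INFO request handled"
print(log_line)  # 2024-01-01T11:59:59.500000+00:00 INFO request handled
```

That is why lining up log lines with monitoring timestamps at millisecond resolution requires knowing the skew of the host that wrote the log.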