I noticed that Dynatrace captures and counts periodic network activity, such as embedded chat polling, as part of the loading duration. This produces false timings, e.g. 100 s instead of 10 s for the functional XHRs to finish, or worse, it leads to a 180 s timeout. That is certainly not what one expects from a measurement of action duration.
According to support, there is no way to exclude requests from the waterfall analysis.
What capabilities exist for managing which resources/XHRs are included in the duration, and what is the best practice for adjusting these measurements?
This page details the different performance metrics that are considered for various user actions. One possibility is to look at the metrics for the XHR action that appears to take longer than expected and check whether one of the other metrics is a more accurate indicator of what you're looking for, for example the "HTML Downloaded" or "Response End" metric.
I know this; using other metrics is not an option. I need to find the root cause of the problem and fix it. Then I can use the required metrics with confidence as a source of truth.
How are you supposed to build automated quality gates if the metrics are broken? A workaround is not an option for a high-quality product like Dynatrace.
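For anyone who needs to scope a measurement manually in the meantime: a minimal sketch, assuming the Dynatrace RUM JavaScript API (`dtrum.enterAction` / `dtrum.leaveAction`) is available on the page. The helper name `measureXhr` and its usage are hypothetical; the idea is to wrap only the functionally relevant request in a custom action so its duration is reported independently of unrelated periodic traffic like chat polling.

```javascript
// Hypothetical helper: wrap a functionally relevant request in a custom
// Dynatrace action so its duration can be read in isolation, regardless of
// unrelated periodic XHRs (e.g. embedded chat keep-alives) on the page.
function measureXhr(actionName, doRequest) {
  // dtrum is the Dynatrace RUM agent's JavaScript API; guard for pages
  // (or test environments) where the agent is not injected.
  const hasDtrum = typeof dtrum !== 'undefined';
  const actionId = hasDtrum ? dtrum.enterAction(actionName) : null;

  // doRequest must return a Promise (e.g. a fetch call).
  return doRequest().finally(() => {
    // Close the custom action as soon as the relevant request settles,
    // so later background requests do not extend its duration.
    if (hasDtrum) {
      dtrum.leaveAction(actionId);
    }
  });
}

// Example usage (URL is illustrative):
// measureXhr('load-orders', () => fetch('/api/orders').then(r => r.json()));
```

This does not remove the noisy requests from the waterfall, but it gives you a custom action whose duration reflects only the wrapped call, which can then serve as the metric for a quality gate.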