As we all know, Dynatrace tracks all requests, from end to end, and automatically monitors the services that underlie each transaction.
Does this mean that this information is complete? E.g., is there no head-based or tail-based sampling?
In short, how reliable is this info?
From my point of view, the info should be reliable as long as the technology is fully supported. If needed, you can control the number of traces captured per process per minute (check Adaptive traffic management for more details) and use request attributes. If you are using log monitoring, trace log enrichment will help with more details as well.
I hope I have understood you correctly and I hope this helps.
Great answer, thanks.
My concern was basically that other observability vendors all implement things like tail-based and/or head-based sampling to limit resource usage.
So after reading about Adaptive traffic management, I conclude that OneAgent uses head-based sampling (to keep it simple).
So the text "all requests, from end to end" should be read as "all requests that are captured, are captured end to end". It is possible that not all requests are captured.
This means that request attributes can also be sampled, and that answers my question, thanks!
My experience is: if there is no adaptive traffic management, request attribute values are accurate. For high-throughput services (way above the default limit of 1000 captured traces per process group), the counters for request attribute values might not be 100% correct (e.g. you see 2450 instead of 2445 as the counter), but still precise enough for observability - alerting, dashboards, etc.
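To make the effect above concrete, here is a minimal sketch of head-based sampling with a per-minute capture cap, loosely modeled on the "1000 captured traces per process group" limit mentioned above. This is purely illustrative: the class name, the reset logic, and the numbers are my own assumptions, not Dynatrace's actual implementation of Adaptive traffic management.

```python
class HeadBasedSampler:
    """Illustrative head-based sampler with a per-minute trace budget."""

    def __init__(self, max_traces_per_minute=1000):
        self.max_traces_per_minute = max_traces_per_minute
        self.captured_this_minute = 0

    def start_minute(self):
        # Reset the capture budget at each minute boundary.
        self.captured_this_minute = 0

    def should_capture(self):
        # Head-based: the keep/drop decision is made when the trace
        # STARTS; once kept, the whole trace is recorded end to end.
        if self.captured_this_minute < self.max_traces_per_minute:
            self.captured_this_minute += 1
            return True
        return False


sampler = HeadBasedSampler(max_traces_per_minute=1000)
sampler.start_minute()

# 2445 real requests arrive in one minute, but only the budgeted
# number of traces is captured, so counters derived from captured
# traces (e.g. request attribute counts) can undercount the truth.
captured = sum(sampler.should_capture() for _ in range(2445))
print(captured)  # → 1000
```

This is why the counters stay "precise enough" rather than exact once throughput exceeds the limit: every captured trace is complete end to end, but the population of captured traces is a subset of all requests.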