26 Oct 2025 06:32 PM - last edited on 27 Oct 2025 08:46 AM by Michal_Gebacki
Hi Folks,
On Managed I came across these functions at the enterprise level. On SaaS with DPS these functions are arguably even more important. Why? Because there is a limit on the ingested trace data volume:
https://docs.dynatrace.com/docs/shortlink/dps-hosts#distributed-traces
"Dynatrace provides the ability to extend the amount of trace data ingested with OneAgent. To do this you can request an increased Trace ingest limit beyond the 200 KiB of trace data included per memory-gibibyte."
Because only 200 KiB of trace data per memory-gibibyte is included, Adaptive Traffic Management (ATM) is activated once you cross this included limit.
Adaptive Traffic Management with Dynatrace Platform Subscription (DPS) — Dynatrace Docs
Here is an example where ATM is activated: the trace and request capture rate has dropped below 40%.
You can check this on the Full-Stack Adaptive Traffic Management (gen3) dashboard.
On the dashboard you can increase the factor from 1.0 to see the missing trace data volume (i.e. the predicted extended ingest volume). You can also check the included limit (calculated from the full-stack covered memory size) and the configured limit (which can be increased by Support); the configured limit is simply the factor times the included limit. The orange area is the dropped data volume and could be subject to extra ingest costs.
So what can be done to increase the capture rate of traces and requests?
1. Exclude unwanted / unnecessary incoming web request URLs like /health, /metrics, /actuator and so on. I am still a Managed fan, so I checked the candidate web requests in MDA, but you can also use DQL (see the first sketch after this list).
2. Exclude unwanted / unnecessary exceptions. Again, you can use MDA or DQL to find candidates (see the second sketch after this list).
3. Raise a support ticket and ask Support to increase the factor from 1.0 based on your requirements. Of course, this should only be done after the two previous steps. In this case the factor was increased from 1.0 to 2.0, so the configured ingest limit doubled compared to the included limit. The data volume above the included limit (the orange area) is subject to extra ingest costs.
Capture rate is almost 100% again:
4. You can combine DEV / UAT / PROD in one SaaS environment in order to have a bigger covered memory size. This solution may not be acceptable for many organizations, though.
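To find the exclusion candidates for steps 1 and 2 with DQL, here are two minimal sketches, assuming your traces are stored on Grail. The attribute names (request.is_root_span, url.path, exception.type) are assumptions on my side; verify the span attributes that actually exist in your environment, e.g. by inspecting a sample span in the Distributed Traces app.

fetch spans, from: now() - 24h
// keep only spans that start a trace (incoming web requests) - attribute name is an assumption
| filter request.is_root_span == true
// count requests per path to spot /health, /metrics, /actuator and similar candidates
| summarize requests = count(), by: { url.path }
| sort requests desc
| limit 20

And a similar one for the exceptions (depending on your setup, exceptions may be recorded as span events instead of a flat attribute, so check first):

fetch spans, from: now() - 24h
// keep only spans that recorded an exception - attribute name is an assumption
| filter isNotNull(exception.type)
// count occurrences per exception type to find the noisiest ones
| summarize occurrences = count(), by: { exception.type }
| sort occurrences desc
| limit 20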
+ tip: You can also use the new Distributed Traces app to check the span ingest sizes:
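If you prefer DQL, a rough proxy is to count spans per service to see where most of the trace data comes from (the app itself shows the actual ingest sizes; the dt.entity.service enrichment on spans is an assumption here):

fetch spans, from: now() - 24h
// count spans per service as a rough indicator of which services produce the most trace data
| summarize spans = count(), by: { dt.entity.service }
| sort spans desc
| limit 20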
On Managed clusters the first two steps are also important in order to decrease the required cluster disk space.
I hope you find it useful.
János
26 Oct 2025 06:55 PM
@Mizső ,
Yes, the migration to DPS has some gotchas! Dynatrace has already raised the limit to 200 KiB (it was 45 KiB before...), and I believe it needs to be raised further to bring it in line with Classic Licensing.
You can also check "Trace sampling for HTTP requests", which can help by sampling down requests you don't want to exclude entirely.
And yes, from my experience, exceptions are one large contributor in many environments.
26 Oct 2025 07:15 PM
I agree with you. I was shocked when I saw these capture rates.
Regarding sampling, some information from Support:
"Rules to exclude or change the sampling rate only apply to the root span that starts the trace.
Thus, if you see traces where ServiceA is calling ServiceB and the sampling rate on ServiceB is low for a particular request, then a sampling rule should be set for requests on ServiceA.
Also, to have a higher capturing rate for requests in the service, some other requests for the same service should have a lower capturing rate."
Best regards,
János