
This product reached the end of support date on March 31, 2021.

Session size - component contribution


Here is a session directory screenshot of a 30-minute Dynatrace session that is 15 GB in size. We know that is too large for a 30-minute session.

I'm trying to understand which component contributes most to the session size.

From the image we can see that nodes, session, attachmentstrings, attachments, and sampling are the biggest contributors. Could someone help me map these to settings in Dynatrace that we could tweak to reduce some of them?

I know this is not the ideal way to approach a problem like this, but I think there is a close relationship between these components and the profile/sensor configuration.



Dynatrace Pro


I would say that 15 GB of PurePath data for 30 minutes isn't unusual. But if you want to reduce the size, here are a few things you can look at:

1) First, PurePath size. Go to your PurePaths dashlet and sort by the size column. Make sure you are retrieving all the PurePaths for, say, a 5-minute period (there is a default retrieval limit of 10,000).

You shouldn't see many PurePaths larger than 300 nodes on a production system. There is also a hard maximum of 10,000 nodes (or lines) per PurePath. If you have many PurePaths bigger than 1,000 nodes, review their actual content.

You might be over-instrumenting. Check your custom sensors and which OOTB sensor packs have been deployed for each agent group.

2) Check that you aggregate your SQL statements and exceptions. The settings are accessible under the JDBC/ADO and exception sensors.

3) Check that you haven't increased the maximum SQL statement size or the stack-trace depth for exceptions beyond the default values (again, accessible in the sensor properties).

4) Check whether you have thousands of tiny PurePaths (node size of 1). They probably provide very little value, so you can exclude them via the servlet sensors; the agent will then stop sending them. This is usually static content you are not interested in (but you might be...) or badly configured custom entry points.
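To make point 2 concrete, SQL aggregation works by stripping the bind values and literals out of statements so that structurally identical queries collapse into a single aggregated entry. Here is a rough sketch of the idea in Python (an illustration only, not Dynatrace's actual algorithm):

```python
import re

def normalize_sql(stmt):
    """Collapse literals so structurally identical statements aggregate.

    A rough illustration of the concept -- not Dynatrace's implementation.
    """
    stmt = re.sub(r"'[^']*'", "?", stmt)   # string literals -> placeholder
    stmt = re.sub(r"\b\d+\b", "?", stmt)   # numeric literals -> placeholder
    return stmt

# Two statements that differ only in their literal values...
a = normalize_sql("SELECT * FROM orders WHERE id = 42 AND status = 'OPEN'")
b = normalize_sql("SELECT * FROM orders WHERE id = 7 AND status = 'CLOSED'")
print(a == b)  # True: both become one aggregated statement
```

Without aggregation, every distinct literal value produces a distinct stored statement string, which is exactly the kind of thing that inflates a session's string data.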

These are a good starting point anyway. Let me know how it goes.



Thanks, Florent, for the detailed answer.

Apart from the node size, all other settings have been tuned to the minimum. We don't have many custom sensors, and we can't do much about the node count because requests go through 20+ app instances and make 100+ internal calls across them. The Dynatrace out-of-the-box tagging sensor nodes alone add up to 1,000+ nodes per PurePath. We also have many auto sensor nodes due to bad response times.

Let me restate the question: with nodes being the top contributor to the session size, could a lower-resolution auto sensor help here? I think it would, but I don't know the difference in sampling interval between the auto sensor resolutions. We have many agent groups, and auto sensor changes are not easy to trial-and-error, otherwise I would have tried it and checked myself.


Dynatrace Pro


You can try reducing the instrumentation level of the auto sensors to the lowest setting. It's usually more than enough in production anyway.

If you have 1,000+ nodes per PurePath, then you are going to need plenty of disk space. I hope you have a powerful server, as it will need plenty of processing power to deal with this. It might be worth checking the Dynatrace server health dashboard if you haven't already (look for skipped PurePaths, etc.).

How much disk space did you allocate to your PurePaths? For most customers, keeping 3 days of PurePaths isn't too bad. How long does yours last?


We have 8 TB for session storage, and we get a week or two of data.
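As a back-of-the-envelope check, that retention is consistent with the 15 GB per 30 minutes mentioned at the top of the thread (assuming roughly constant traffic around the clock, which real systems won't have):

```python
# Rough retention estimate from the figures quoted in this thread.
# Assumption: traffic is roughly constant around the clock.
gb_per_half_hour = 15
gb_per_day = gb_per_half_hour * 2 * 24   # 720 GB per day
storage_gb = 8 * 1024                    # 8 TB of session storage
retention_days = storage_gb / gb_per_day
print(round(retention_days, 1))  # 11.4 days -- roughly "a week or two"
```

Quieter nights and weekends would stretch that estimate toward the upper end of the observed one-to-two-week range.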

I am new to this environment and am trying to understand the whole setup. And yes, sometimes when many app instances come back after a deployment, we see the RTA queue grow and PurePaths being skipped. I see we have a custom max RTA thread setting.

You are not in bad shape then! 1-2 weeks is plenty.

Any long-term data should be managed via aggregated measures, which can then be kept for almost as long as you wish.