
This product reached its end-of-support date on March 31, 2021.

Improve performance of PurePath retrieval


We are experiencing long delays when retrieving a week of PurePath data. The servers meet the required specs.

The Dynatrace server is a VM running Windows Server 2012, with the session storage location mapped to a SAN disk array. What should I look at first to improve data retrieval?



Dynatrace Guru

When you say you want to retrieve one week of PurePath data - do you mean you want to open a PurePath dashlet and look at every single PurePath that came in in the last week? Or do you mean it takes a long time to open a dashboard that contains a chart with a timeframe of 7 days? What is the goal you want to achieve? What type of diagnostics do you want to do?

One example is looking at the PurePath dashlet for the previous 7 days to see either all PurePaths or the PurePaths specific to certain agents/applications.

Another example is loading the Web Services dashlet for the previous 24-hour time period to view all web services. We get a message saying a full analysis will take 41 minutes to run.


Could we see the Server Health dashboard and the Session Storage Health dashboard, from Start Center --> Monitoring?

-- Graeme

Here is the Server Health dashboard, but I don't see the storage health dashboard. Does that exist in version 5.6?




You're right; there is no storage health dashboard in 5.6. But I did notice a few things in the Server Health dashboard you posted.

It seems that the Dynatrace Server is running out of memory, and the CPU, although not strictly speaking overloaded, is pretty tight.  Is this perhaps a VM that can be expanded?

A more serious problem is that the PurePath size is far too high – you can see that the maximum is hitting the 100K limit, and the average is 20K or more.  I assume you're monitoring production systems, for which we normally recommend a PurePath length of at most a few hundred.  This means that you might be collecting hundreds of times more data than necessary, which naturally means that some operations will be hundreds of times slower than necessary.
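To put rough numbers on that overhead (a back-of-envelope sketch only; the 200-node recommendation and the 20K/100K figures are taken from the discussion above, not measured on this system):

```python
# Rough estimate of PurePath over-collection relative to the
# recommended size. All figures are assumptions from the thread.
recommended_nodes = 200    # "at most a few hundred" nodes per PurePath in production
average_nodes = 20_000     # observed average PurePath size
maximum_nodes = 100_000    # the hard node limit being hit

avg_factor = average_nodes / recommended_nodes  # how many times larger than recommended, on average
max_factor = maximum_nodes / recommended_nodes  # how many times larger at the limit

print(f"Average PurePath is ~{avg_factor:.0f}x the recommended size")
print(f"Largest PurePaths are ~{max_factor:.0f}x the recommended size")
```

That ratio is why retrieval can end up hundreds of times slower than it needs to be: the server is reading, deserializing, and rendering that many more tree nodes per PurePath.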

Are you running with a lot of custom sensors?  If so, perhaps we could look at them and see how to trim them down without losing any significant data.

-- Graeme


Can we schedule a call to discuss your thoughts? To me the server looks fine, since memory usage is running at 4 GB of the 8 GB heap the JVM is configured with (-Xmx7936M and -Xms7936M), and the CPU is averaging 20%.


We have some large transactions that reach the node limit, and we have talked about increasing that limit. But I would also like to discuss how to trim down the custom sensors without losing data, to help with the limit issues.

My time zone is EST; let me know your availability.