In the dynaTrace Server Settings you need to increase the server memory setting. The more memory you give the server, the more it will use for the PurePath buffer.
This is one way to get more PurePaths into the "Live Session". Another thing to look at is reducing the length (size) of your PurePaths. We often see installations with PurePaths that have several thousand nodes on average - typically caused by over-instrumentation. dynaTrace 5.5 also provides new features such as DB Aggregation, which can be turned on through the ADO.NET and JDBC Sensors and will greatly reduce the length of the PurePath. Also, have a look at your exceptions and log messages: you might be able to exclude certain exceptions that don't give you value from being captured, also resulting in shorter PurePaths and thereby opening up more space for other PurePaths to fit in the buffer.
There is a balance between memory required to properly process incoming PP data, and memory required for processing data for analytics (driven by users using the dT client), especially in high-volume applications. I've been fine-tuning this at large customers with good success, pushing the PP buffer lower while keeping the heap size higher, and still avoiding costly GCs on the dT server. If support was suggesting that you push the PP buffer size lower, perhaps they were taking you down this path. If you have a high-volume app, I'd suggest that you reopen your case to discuss this with them. They can share a parameter that lets you control the PP buffer size directly without having to raise or lower the dT heap.
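One way to verify that a larger heap combined with a smaller PP buffer really is avoiding costly GCs is to enable GC logging on the dT server's JVM and review the pause times. The flags below are standard HotSpot options (Java 6/7 era); where and how you set them for the dynaTrace Server process depends on your installation, and the heap size and log path are placeholder values, not recommendations:

```shell
# Example JVM options for the dynaTrace Server process.
# -Xmx and the log path are placeholders -- adjust for your environment.
-Xmx8g                      # overall server heap (example value only)
-verbose:gc                 # basic GC activity on stdout
-XX:+PrintGCDetails         # per-collection breakdown (young vs. full GC)
-XX:+PrintGCTimeStamps      # timestamps for correlating pauses with load
-Xloggc:/path/to/gc.log     # write the GC log to a file for later review
```

Frequent or long full-GC entries in the resulting log suggest the heap is under pressure and the PP buffer may need to come down further (or the heap go up).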
And of course Andi is correct by saying that you should do all you can to control overall PurePath length by removing over-instrumentation, etc.
Exactly - I am more concerned about the memory required for analytics and would not want to affect the processing of incoming data, and it was obvious that reducing the heap memory would affect the latter.
I got a reply from support regarding the buffer size parameter and will update you guys on the results.
One best practice I've seen some customers follow is to install a separate dynaTrace Server that is used purely for stored/offline session analysis. This server doesn't require a license and is really just used to "host" sessions that you export from your production environment. The groups that need to access the stored sessions can then analyze them on that server without impacting any CPU or memory resources on your production server.
Well, our scenario is that we need up-to-date XML exports of the dashboards for use in a third-party reporting application, and the above solution would require an extra mechanism to periodically export the sessions - assuming there is a reliable way to do that.
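For what it's worth, a periodic export mechanism can be a very small script. The sketch below polls a dashboard-report REST endpoint and writes the XML to disk; note that the endpoint path and port 8020 are assumptions based on later dynaTrace/AppMon versions, and the host, dashboard name, and output path are placeholders - check the REST documentation for your server version before relying on this:

```python
import time
import urllib.request

def build_report_url(server: str, dashboard: str,
                     report_type: str = "XML", port: int = 8020) -> str:
    """Build the dashboard-report URL (the REST path is an assumption)."""
    return (f"http://{server}:{port}/rest/management/reports/create/"
            f"{dashboard}?type={report_type}")

def export_dashboard(server: str, dashboard: str, out_path: str) -> None:
    """Fetch one XML report and write it to disk."""
    url = build_report_url(server, dashboard)
    with urllib.request.urlopen(url) as resp, open(out_path, "wb") as out:
        out.write(resp.read())

if __name__ == "__main__":
    # Poll every 5 minutes; hostname, dashboard, and path are placeholders.
    while True:
        export_dashboard("dtserver.example.com", "MyDashboard",
                         "/var/exports/MyDashboard.xml")
        time.sleep(300)
```

In practice you would run something like this from cron or a scheduled task rather than a sleep loop, and add error handling so one failed fetch doesn't stop the exports.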
I stumbled on your post while looking for more information on PP buffer size, specifically regarding your point: "I've been fine-tuning this at large customers with good success, pushing the PP buffer lower while keeping the heap size higher, and still avoiding costly GCs on the dT server."
Can you provide some insight into what steps you typically take, or what you look for, when reducing the PP buffer size?
What version of dynaTrace are you running? Here is my advice per version:
Hope that helps,