Each running Agent collects important information about the instrumented process, such as memory usage, GC time, CPU usage (the percentage of CPU capacity currently consumed by the monitored process), thread count, and the transactions that pass through that node.
You can change the default values from the
Thank you for the information, but I do not understand why we should update the DT Server's init file and not the Agent's. The Agent runs out of memory, not the Server. I am lost here.
And which -D<property> setting applies in this case?
The message you are getting on the Dynatrace Server is caused by the default host health configuration.
Out of Memory errors in Java and .NET occur when the JVM/CLR cannot create a requested object because it cannot allocate any more memory. This can happen for several reasons, for example a memory leak, an undersized heap, or a sudden spike in allocations.
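As a minimal illustration (not Dynatrace-specific; `OomDemo` and `tryAllocate` are hypothetical names), the snippet below shows the JVM refusing an allocation request it cannot satisfy:

```java
public class OomDemo {
    // Try to allocate `count` longs; return a status string instead of crashing.
    static String tryAllocate(int count) {
        try {
            long[] block = new long[count];
            return "allocated " + block.length + " longs";
        } catch (OutOfMemoryError e) {
            // The JVM could not satisfy the request within its limits.
            return "OutOfMemoryError: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // A small request succeeds.
        System.out.println(tryAllocate(1_000));
        // A request for Integer.MAX_VALUE longs exceeds what the JVM will grant.
        System.out.println(tryAllocate(Integer.MAX_VALUE));
    }
}
```

In a real application the error is usually not one giant allocation but many small ones that gradually exhaust the heap, which is why memory diagnostics over time are needed.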
First, you can run Memory Diagnostics. The 'Performance Clinic' session below walks through the process step by step.
Secondly, you can increase the memory available to that specific JVM.
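For example — this is only a sketch, and the exact file and variable name depend on your application server — raising the heap for one JVM usually means adjusting its startup options:

```shell
# Hypothetical startup-script fragment (e.g. Tomcat's setenv.sh);
# the variable name and location vary per application server.
JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx2048m"
```

Here `-Xms` sets the initial heap and `-Xmx` the maximum heap; the JVM must be restarted for the change to take effect.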
I faced a similar situation when installing new application servers (WAS 8.5) with very low load.
The incident is raised as designed:
"The process constantly spent more than 15% of its execution time on Garbage Collection in the last 5 minutes"
Normally the GC incident is raised at the moment an application server runs into memory issues (often due to memory leaks). It is a signal to take action on short notice (restart the application server) and to investigate memory consumption as root-cause analysis. On a very lightly loaded application server (= very low execution time), however, a single deep GC can trigger this incident. What is also different in the low-load case is that the incident stays open for only a minute or two and then closes again. So our hosting teams now first take a proper look at the application server's health: if GC time is low (a few ms) and not repeating continuously (= no real GC issue), no restart is performed and the incident is closed.
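A quick back-of-the-envelope calculation (with illustrative numbers, not values taken from Dynatrace) shows why a low-load server can trip the 15% rule:

```java
public class GcRatio {
    // Percentage of execution time spent in GC over a measurement window.
    static double gcPercent(double gcMillis, double execMillis) {
        return 100.0 * gcMillis / execMillis;
    }

    public static void main(String[] args) {
        // Busy server: 40 ms of GC against 60 s of execution time -> far below 15%.
        System.out.println(gcPercent(40, 60_000));
        // Idle server: one 30 ms deep GC against only 150 ms of execution -> 20%,
        // enough to raise the incident even though the GC itself was harmless.
        System.out.println(gcPercent(30, 150));
    }
}
```

Because the threshold is a ratio, the same short GC pause that is invisible under load dominates the denominator on an idle server.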
I experienced that once the load on these servers goes up, the GC incidents stop coming (at least as long as the server is not in trouble).