23 Jan 2026 06:46 AM
Hi,
On Exadata Node 1, we are observing that the reported process-level memory consumption, when summed across processes, exceeds the physical memory available on the host.
While reviewing the Dynatrace metrics available to us, we found that only the metric builtin.generic.mem.workingSetSize consistently reports values at the process level. However, this metric includes shared memory, which can inflate per-process figures, especially on systems where many processes map the same shared memory segments (for example, Oracle database processes all attaching to the SGA).
We would like to understand whether it is possible to obtain a more accurate representation of memory usage for each individual process, specifically excluding shared memory. Ideally, we are looking for metrics that reflect private or unique memory consumption per process (for example, resident private memory, private working set, or a similar concept).
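To illustrate the kind of value we are after, here is a minimal sketch of how private memory per process could be read at the OS level. It assumes /proc/&lt;pid&gt;/smaps_rollup is available (Linux 4.14 or later, which the Oracle Linux kernels on Exadata should satisfy); the script and any naming in it are our own placeholders, not something Dynatrace provides:

```python
#!/usr/bin/env python3
"""Rough sketch: per-process private memory on Linux, excluding shared mappings.

Assumes /proc/<pid>/smaps_rollup exists (Linux >= 4.14). Values are purely
illustrative, not a validated Dynatrace metric.
"""
import sys
from pathlib import Path


def private_memory_kb(pid: int) -> int:
    """Sum Private_Clean + Private_Dirty for one process, in kB."""
    rollup = Path(f"/proc/{pid}/smaps_rollup").read_text()
    total_kb = 0
    for line in rollup.splitlines():
        # Lines look like: "Private_Dirty:      1234 kB"
        if line.startswith(("Private_Clean:", "Private_Dirty:")):
            total_kb += int(line.split()[1])
    return total_kb


if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else 1
    print(f"PID {pid}: ~{private_memory_kb(pid)} kB private (clean + dirty)")
```

Essentially, resident memory minus the shared portions (or the Pss figure from the same file) is what we would like to see charted per process.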
Could you please advise if Dynatrace provides such metrics out of the box, or if there are recommended approaches (custom metrics, extensions, OS-level integrations, etc.) to capture this level of detail? Additionally, any guidance on best practices for interpreting process-level memory usage on Exadata systems would be greatly appreciated.
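If the recommended route is a custom metric or an extension, would something along these lines be reasonable? The local ingest endpoint, port, and metric key below are assumptions on our side, based on our understanding of the OneAgent local metric ingest feature, and not something we have validated:

```python
#!/usr/bin/env python3
"""Sketch of pushing a private-memory value into Dynatrace as a custom metric.

Assumptions (please correct us if wrong): the OneAgent local metric ingest API
is enabled on its default port 14499, and "custom.process.private_memory" is a
metric key we are free to define.
"""
import urllib.request

INGEST_URL = "http://localhost:14499/metrics/ingest"  # assumed default local API


def push_private_memory(pid: int, private_kb: int) -> None:
    """Send one data point using the metric line protocol: key,dims value."""
    line = f"custom.process.private_memory,pid={pid} {private_kb}"
    req = urllib.request.Request(
        INGEST_URL,
        data=line.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(f"Ingest responded with HTTP {resp.status}")


if __name__ == "__main__":
    # Hypothetical example values: PID 4242, 123456 kB of private memory.
    push_private_memory(4242, 123456)
```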
Thanks!