

Why would there be large differences in memory shown per CLR and in Memory Dump?

dave_mauney
Dynatrace Champion

I am working with a customer that is fighting a memory leak in their app pool worker CLR.

When they take a memory dump, the total size is very small (say 250 MB), but the CLR process memory and Host memory show very large usage (say 32 GB).

We can see in the memory charts for the CLR that the memory is not being reclaimed, so it is not likely caused by the "Force garbage collection" flag on the memory dump.

I have not seen a raw dump to verify whether the before and after totals differ, but my understanding is that the raw dump reports memory estimated by (I think) the same API we use to get the heap measures shown in the memory charts.

Is it possible the memory is not reclaimed but also not accessible to our memory dump?

We are hard pressed to definitively solve the memory leak if we are only able to see a small fraction of the problematic memory allocation.
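As a rough illustration of the gap being described (a minimal sketch using only standard .NET APIs, not anything Dynatrace-specific), comparing the managed heap size against the process-level counters shows how far apart the two views can be:

```csharp
using System;
using System.Diagnostics;

class MemoryGapCheck
{
    static void Main()
    {
        // Managed heap only: roughly the scope of a dump of live .NET objects.
        long managedBytes = GC.GetTotalMemory(forceFullCollection: true);

        // Whole process: managed heap + native allocations, JITted code, loaded modules, etc.
        using var proc = Process.GetCurrentProcess();
        long workingSet = proc.WorkingSet64;
        long privateBytes = proc.PrivateMemorySize64;

        Console.WriteLine($"Managed heap:  {managedBytes / (1024 * 1024)} MB");
        Console.WriteLine($"Working set:   {workingSet / (1024 * 1024)} MB");
        Console.WriteLine($"Private bytes: {privateBytes / (1024 * 1024)} MB");
    }
}
```

If the first number is small while the other two are huge, the growth is happening outside the managed heap that the memory dump walks.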

Thanks,

dave

4 REPLIES

dave_mauney
Dynatrace Champion

I opened a case: https://support.dynatrace.com/supportportal/browse/SUPDT-36551

rick_boyd
Mentor

The most likely answer is that the difference in the numbers you see is between the runtime heap and the native memory space. The AppMon agent doesn't have access to native memory for snapshot purposes.
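To make that concrete (a minimal sketch using standard .NET interop, not AppMon code), a native allocation grows the process footprint without ever appearing in the managed heap that a .NET memory dump covers:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class NativeVsManaged
{
    const int Size = 512 * 1024 * 1024; // 512 MB

    static void Main()
    {
        long managedBefore = GC.GetTotalMemory(true);
        long privateBefore = Process.GetCurrentProcess().PrivateMemorySize64;

        // Allocated outside the GC heap - invisible to a managed-object dump.
        IntPtr native = Marshal.AllocHGlobal(Size);

        // Touch each page so the memory is actually charged to the process.
        for (int i = 0; i < Size; i += 4096)
            Marshal.WriteByte(native, i, 1);

        long managedAfter = GC.GetTotalMemory(true);
        long privateAfter = Process.GetCurrentProcess().PrivateMemorySize64;

        Console.WriteLine($"Managed heap delta:  {(managedAfter - managedBefore) / (1024 * 1024)} MB");
        Console.WriteLine($"Private bytes delta: {(privateAfter - privateBefore) / (1024 * 1024)} MB");

        Marshal.FreeHGlobal(native);
    }
}
```

The managed delta stays near zero while private bytes jump by ~512 MB, which is the same shape as a small dump next to a 32 GB process.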

Hi Rick,

Hope all is well with you! Yes, I agree that is the most likely case, but I am pretty sure the Old Generation chart was large for some of these dumps. I say "pretty sure" because I mainly focused on one case where there was a huge Old Gen heap, but in that particular case the memory dump failed because the CLR was probably too far gone. I will see if the customer can provide screenshots showing a large heap in the charts vs. the size of the full memory dump.

Thanks,

dave

dave_mauney
Dynatrace Champion

Here is the lab comment for the Internal Case I opened for this issue:

Our .NET Memory dump only captures "alive .NET objects of one CLR". So, what you see is the sum of the sizes of all managed objects that are alive after a GC.

Other sources of memory usage that are not included in our memory dump:


  • Native data structures held by the CLR. This can include loaded libraries/modules, JITted machine code, GC data structures, etc.
  • Native data structures held by other native components. Any native module that lives in your process may have memory allocated (see the sketch after this list).
  • Native data structures held by the Dynatrace agent. Our agent is also a native module, which uses some memory. It goes without saying that we strive to keep this as low as possible, and it should not make up anywhere near as much as you described here.
  • Objects from other CLRs living in the same process. Our agent is designed to capture only one CLR per process. There are CLR-host implementations where multiple CLRs coexist; having a .NET 2.0 and a .NET 4.0 CLR mixed in one process is one such case, but I've seen others too.
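As a rough way to look at the native side of this (a minimal sketch using only System.Diagnostics; the sizes shown are loaded image sizes, not total native allocations), you can list the native modules loaded into the worker process:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ListNativeModules
{
    static void Main(string[] args)
    {
        // Pass the w3wp.exe process id as the first argument, or inspect the current process.
        using var proc = args.Length > 0
            ? Process.GetProcessById(int.Parse(args[0]))
            : Process.GetCurrentProcess();

        long totalBytes = 0;
        foreach (ProcessModule m in proc.Modules.Cast<ProcessModule>()
                                                .OrderByDescending(m => m.ModuleMemorySize))
        {
            totalBytes += m.ModuleMemorySize;
            Console.WriteLine($"{m.ModuleName,-40} {m.ModuleMemorySize / (1024 * 1024)} MB");
        }
        Console.WriteLine($"Total module image size: {totalBytes / (1024 * 1024)} MB");
    }
}
```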

Some tips on how to debug this: