
This product reached the end of support date on March 31, 2021.

Recommendations for G1 Garbage Collector



I am working with a customer that switched his instrumented JVMs over to the G1 garbage collector (GC) instead of the CMS GC. Ever since the switchover, the heap for these JVMs has been running very high (90% utilization). My question is: will using the G1 GC cause a JVM's heap to run higher than it would with the CMS GC or Parallel GC? If so, how would you suggest alerting on a JVM running out of memory in dynaTrace (90% threshold, 95%, etc.)? For reference on the G1 GC, click Here


Thank You,


  1. You should always go for Suspension as a threshold first; it is the actual impact on the application.
  2. Memory utilization checks the ratio of used vs. committed memory. As such it is a bad measure for alerting thresholds: a value close to 100% can be optimal (very efficient) or can be bad (exhausted). The real indicator is how much GC is going on relative to application execution.
  3. If you want a threshold on memory utilization, then you need to check the usage against the max allowed (a different metric) and not the committed memory.
    If usage gets to 90 or 95% of max, then you are really running close to exhaustion and thus a possible out-of-memory.
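To illustrate point 3, here is a minimal sketch using the standard `java.lang.management` API. The class name and the 90% threshold are illustrative, not from the thread (dynaTrace computes its own measures); the point is that used/committed and used/max are different ratios, and only the latter signals exhaustion.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    /** Ratio of used heap to the max allowed; -1 if max is undefined. */
    static double usedVsMax(MemoryUsage u) {
        long max = u.getMax();          // -1 when the JVM reports no limit
        return max > 0 ? (double) u.getUsed() / max : -1.0;
    }

    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();

        // used/committed can sit near 100% on a healthy G1 JVM...
        double vsCommitted = (double) heap.getUsed() / heap.getCommitted();
        // ...while used/max is the ratio that actually indicates exhaustion.
        double vsMax = usedVsMax(heap);

        System.out.printf("used/committed = %.2f, used/max = %.2f%n",
                vsCommitted, vsMax);
        if (vsMax > 0.90) {
            System.out.println("WARN: heap above 90% of max, possible OOM");
        }
    }
}
```

The helper takes a `MemoryUsage` rather than reading the bean directly, so the same logic works for the whole heap or for a single memory pool.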



Your post is very helpful, and I will explain this to my customer.

Thank You!



So, I'm the customer in question, and I think I understand what you are saying for #1: we've set up a measure monitoring the PP Suspension time (as it's a direct impact on the transactions).

However, your #2 and #3 comments aren't totally making sense to me. Yes, we agree that the G1 GC runs at near 100%, and that's just how it runs; it's supposedly efficient.

So, the issue is that even when you measure "memory usage" against the max allowed rather than committed memory, it still doesn't help: it's still running near 98% during a simple load test, and yet response times and the health of the JVM are just fine.

Any other ideas are welcomed as far as best practices. 


Just came across this discussion, and even though the question is already quite old I would like to add something.

Nate, you are right; I have also seen such behavior. What we currently do is compare the usage of the old-generation memory pool to the maximum memory of the total heap.

G1 can dynamically redefine regions as old or young generation. As long as everything is fine, G1 will try to keep a large young generation, but it can also enlarge the old generation to up to 90% of the total heap (with default G1 settings). So you could say that an old gen usage above 85% of max heap memory indicates a problem.
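The old-gen-vs-max-heap check described above can be sketched with the standard `java.lang.management` pool beans. The class name and the "Old Gen" name match are assumptions for illustration (pool names vary by collector; on a G1 JVM the pool is typically called "G1 Old Gen"); the 85% threshold is the one from this post.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class OldGenCheck {
    /** Threshold from the post: old gen above 85% of max heap is suspicious. */
    static final double THRESHOLD = 0.85;

    static boolean oldGenAboveThreshold(long oldGenUsed, long maxHeap) {
        return maxHeap > 0 && (double) oldGenUsed / maxHeap > THRESHOLD;
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Match the old-generation pool; on G1 this is "G1 Old Gen".
            if (pool.getType() == MemoryType.HEAP
                    && pool.getName().contains("Old Gen")) {
                long used = pool.getUsage().getUsed();
                System.out.printf("%s: %d of %d bytes (%.1f%% of max heap)%n",
                        pool.getName(), used, maxHeap, 100.0 * used / maxHeap);
                if (oldGenAboveThreshold(used, maxHeap)) {
                    System.out.println("WARN: old gen above 85% of max heap");
                }
            }
        }
    }
}
```

Note that the ratio deliberately uses the max of the *total* heap as the denominator, not the old gen pool's own max, because G1 resizes the old generation dynamically.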

When will old gen usage get that high?

1) You allocate and reference a lot of memory:
The old gen will grow because the objects get old and cannot be collected. So with a traditional memory leak (or simply too-low memory settings), the old gen will slowly increase to the maximum.

2) The actual load on the GC is too high:
When the GC cannot keep up with the amount of created/dereferenced objects, it will grow the old gen very fast and end in a full GC (which can then reclaim a huge amount of old gen, because it was only garbage that hadn't been collected fast enough).

So I would recommend:

- prefer monitoring the suspension times

- if you really want to monitor memory consumption to find leaks or the like, go for the old gen usage and compare it to the max heap
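The first recommendation, watching suspension rather than heap percentages, can be approximated outside dynaTrace with the JVM's own GC counters: the fraction of wall-clock time spent in collections over a sampling interval. A minimal sketch, assuming the standard `GarbageCollectorMXBean` API; the class name, the allocation loop, and the 10% alert idea in the comment are illustrative, not from the thread.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class SuspensionSample {
    /** Fraction of wall-clock time spent in GC over an interval. */
    static double gcRatio(long gcMillisDelta, long wallMillisDelta) {
        return wallMillisDelta > 0 ? (double) gcMillisDelta / wallMillisDelta : 0.0;
    }

    /** Sum of reported collection time across all collectors, in ms. */
    static long totalGcTime() {
        long sum = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if not supported
            if (t > 0) sum += t;
        }
        return sum;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcTime();
        long t0 = System.currentTimeMillis();

        // Stand-in for a sampling interval: allocate garbage to give the GC work.
        for (int i = 0; i < 2_000_000; i++) {
            byte[] junk = new byte[64];
        }

        long wall = System.currentTimeMillis() - t0;
        double ratio = gcRatio(totalGcTime() - gcBefore, wall);
        System.out.printf("GC time ratio over interval: %.4f%n", ratio);
        // Alerting idea per the thread: a sustained high ratio (e.g. > 0.10)
        // means real suspension impact, regardless of what % of heap is used.
    }
}
```

Note that `getCollectionTime()` is cumulative GC time, not pause time, so for a mostly-concurrent collector like G1 this overstates actual suspension; it is still a better health signal than raw heap utilization.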