07 Jul 2025 07:22 AM
Hi Folks,
At one of our clients, after the 1.315 AG upgrade, some containerized AGs (3 out of 25, ClassicFullStack) could not restart due to a memory shortage. I have not found any information about containerized AG resource sizing or scaling. The AGs with memory problems were the ones serving more than 40 worker nodes.
Messages:
2025-07-04 14:39:51 UTC INFO [<collector>] [<platform>, MemoryTrackerImpl] 90% memory usage. [Suppressing further messages for 1 minute]
2025-07-04 14:40:29 UTC WARNING [<collector>] [<platform>, MemoryTrackerImpl] Heap memory shortage detected: 99% memory usage in memory pool 'G1 Old Gen'
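These MemoryTrackerImpl lines follow a recognizable pattern, so they can be scanned programmatically when collecting AG logs. Below is a minimal sketch that flags heap-pressure events; the regex is an assumption based only on the two quoted lines and may need adjusting for your actual collector output:

```python
import re

# Assumed log layout, inferred from the two MemoryTrackerImpl lines above:
# "<timestamp> UTC <LEVEL> [...] [..., MemoryTrackerImpl] ... <NN>% memory usage"
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} UTC)\s+"
    r"(?P<level>INFO|WARNING|ERROR)\s+.*MemoryTrackerImpl\]\s+"
    r".*?(?P<pct>\d{1,3})% memory usage"
)

def heap_pressure_events(lines, threshold=90):
    """Yield (timestamp, level, percent) for lines at or above the threshold."""
    for line in lines:
        m = LINE_RE.search(line)
        if m and int(m.group("pct")) >= threshold:
            yield m.group("ts"), m.group("level"), int(m.group("pct"))

log = [
    "2025-07-04 14:39:51 UTC INFO [<collector>] [<platform>, MemoryTrackerImpl] "
    "90% memory usage. [Suppressing further messages for 1 minute]",
    "2025-07-04 14:40:29 UTC WARNING [<collector>] [<platform>, MemoryTrackerImpl] "
    "Heap memory shortage detected: 99% memory usage in memory pool 'G1 Old Gen'",
]
for ts, level, pct in heap_pressure_events(log):
    print(ts, level, pct)
```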
Here is one of the cluster views, which showed no Kubernetes API information until the resource increase (screenshot not included):
From v1beta1 to v1beta5, the recommended AG sizing is:
resources:
  limits:
    cpu: '1'
    memory: 1536Mi
  requests:
    cpu: 500m
    memory: 512Mi
To remediate the issue, we increased it to:
resources:
  limits:
    cpu: '1'
    memory: 2048Mi
  requests:
    cpu: 500m
    memory: 1024Mi
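For reference, one way to apply such an increase on an Operator-managed AG is to patch the DynaKube custom resource. This is only a sketch: the resource name, namespace, and the `spec.activeGate.resources` field path are assumptions based on typical Dynatrace Operator deployments and should be verified against your CRD version:

```shell
# Hypothetical sketch: patch the ActiveGate resource limits on a DynaKube
# named "dynakube" in the "dynatrace" namespace. Verify the exact spec path
# in your Operator/CRD version before applying.
kubectl -n dynatrace patch dynakube dynakube --type merge -p '
spec:
  activeGate:
    resources:
      limits:
        cpu: "1"
        memory: 2048Mi
      requests:
        cpu: 500m
        memory: 1024Mi
'

# Watch the AG pod restart with the new limits:
kubectl -n dynatrace get pods -w
```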
Is there any official recommendation for containerized AGs resources or scaling?
Thanks in advance.
Best regards,
János
07 Jul 2025 09:44 AM
Hi @Mizső
You can find it in point 4: https://docs.dynatrace.com/docs/shortlink/ag-container-deploy#deployment
07 Jul 2025 10:46 AM - edited 07 Jul 2025 10:51 AM
Hi @radek_jasinski,
Thank you very much!!!
Indeed, the yellow clusters with more than 1000 pods were affected. We had been lucky until now with the default recommended 1.5 GB limit.
Best regards,
János