07 Oct 2025 06:40 PM - last edited on 08 Oct 2025 07:47 AM by MaciejNeumann
Team! I'm hitting this problem when scaling an application to 400 pods on a 5-node cluster. This is a test to validate how the cluster behaves during a Kubernetes upgrade. Where should I configure an annotation to avoid this limit?
Kubernetes = 1.31.10
operator = 1.7.0
mode = cloudNativeFullStack
Full error on the pod:
MountVolume.SetUp failed for volume "oneagent-bin" : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Regards
Carlos Carrasco
16 Oct 2025 03:01 PM - edited 16 Oct 2025 03:26 PM
Hi Dynatrace, we are facing the same issue here:
OpenShift 4.14
Kubernetes 1.27
Operator: 1.7.1
OneAgent: 1.321.51.20250905-075429
We drained all the worker nodes so that OneAgent would re-inject itself into the application pods, and we saw the same error (500 errors on each node, with 150 pods per node).
I dug into the Dynatrace CSI driver implementation and found the hard limit here: https://github.com/Dynatrace/dynatrace-operator/blob/79736782947cf49f677cf4dcea041de6db331b4a/pkg/co...
Right now the maximum number of concurrent gRPC requests is 20, and it is hardcoded.
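To illustrate the behavior we observed (this is only a sketch, not the actual dynatrace-operator code), a server-side cap like this typically counts in-flight requests and rejects anything over the maximum with a ResourceExhausted-style error, exactly matching the "current value 21 more than max 20" message:

```go
// Sketch of a hard concurrency limit: not the real CSI driver code, just an
// illustration of the rejection pattern seen in the logs above.
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

const maxRequests = 20 // matches the hardcoded default discussed above

// errResourceExhausted stands in for the gRPC ResourceExhausted status.
var errResourceExhausted = errors.New("rate limit exceeded")

type limiter struct {
	inFlight int64
	max      int64
}

// acquire admits a request if a slot is free; the returned func releases it.
func (l *limiter) acquire() (func(), error) {
	current := atomic.AddInt64(&l.inFlight, 1)
	if current > l.max {
		atomic.AddInt64(&l.inFlight, -1)
		return nil, fmt.Errorf("%w, current value %d more than max %d",
			errResourceExhausted, current, l.max)
	}
	return func() { atomic.AddInt64(&l.inFlight, -1) }, nil
}

func main() {
	l := &limiter{max: maxRequests}
	var releases []func()
	// Fill the limiter to capacity with 20 concurrent mounts.
	for i := 0; i < maxRequests; i++ {
		release, err := l.acquire()
		if err != nil {
			panic(err)
		}
		releases = append(releases, release)
	}
	// The 21st concurrent request is rejected, mirroring the log message.
	if _, err := l.acquire(); err != nil {
		fmt.Println(err) // rate limit exceeded, current value 21 more than max 20
	}
	// Once one request finishes, a new one is admitted again.
	releases[0]()
	if _, err := l.acquire(); err == nil {
		fmt.Println("admitted after a slot freed up")
	}
}
```

The kubelet then retries the failed mount with its own backoff, which is where the startup delay comes from.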
This limit can delay pod startup (about 32 seconds before the CSI driver allows the mount).
You can see the growing retry backoff in the kubelet logs (500ms, 1s, 2s, 4s, 8s, 16s):
Oct 09 21:20:58 node-name-anonym kubenswrapper[1726]: E1009 21:20:58.577489 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:20:59.07747995 +0000 UTC m=+12376458.762968127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Oct 09 21:20:59 node-name-anonym kubenswrapper[1726]: E1009 21:20:59.180261 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:21:00.180246063 +0000 UTC m=+12376459.865734228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Oct 09 21:21:00 node-name-anonym kubenswrapper[1726]: E1009 21:21:00.277374 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:21:02.277356443 +0000 UTC m=+12376461.962844610 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Oct 09 21:21:02 node-name-anonym kubenswrapper[1726]: E1009 21:21:02.475766 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:21:06.475749514 +0000 UTC m=+12376466.161237691 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Oct 09 21:21:06 node-name-anonym kubenswrapper[1726]: E1009 21:21:06.577834 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:21:14.57781502 +0000 UTC m=+12376474.263303193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 21 more than max 20
Oct 09 21:21:14 node-name-anonym kubenswrapper[1726]: E1009 21:21:14.657665 1726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin podName:38986208-2b02-440f-b10d-253f2c791446 nodeName:}" failed. No retries permitted until 2025-10-09 21:21:30.657659187 +0000 UTC m=+12376490.343147366 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oneagent-bin" (UniqueName: "kubernetes.io/csi/38986208-2b02-440f-b10d-253f2c791446-oneagent-bin") pod "pod-name-anonym" (UID: "38986208-2b02-440f-b10d-253f2c791446") : rpc error: code = ResourceExhausted desc = rate limit exceeded, current value 22 more than max 20
Do you have any guidance regarding this issue?
21 Oct 2025 01:19 PM
Prgss:
Please try increasing the limit by setting the following environment variable on the server container: GRPC_MAX_REQUESTS_LIMIT (the current default is 20).
Container name: server
env:
  - name: GRPC_MAX_REQUESTS_LIMIT
    value: "30"
One-liner via kubectl:
kubectl set env daemonset/dynatrace-oneagent-csi-driver -c server GRPC_MAX_REQUESTS_LIMIT=30
Regards
Carlos Carrasco
27 Oct 2025 02:14 PM - edited 27 Oct 2025 02:15 PM
Hi @CarlosCarrascoR, thank you for your reply.
What value did you use? 30?
Do you see fewer error messages on the nodes?
What about the CSI pods? Did you raise the CPU request to absorb more parallel gRPC requests?
Thank you for your feedback.