
Bug in Dynatrace 0.15.0 - Error: Pod XX invalid: spec.initContainers[1].name: Duplicate value: "install-oneagent"



We encountered a pod injection problem when using Dynatrace in applicationMonitoring.

What we did:
DynaKube: "Disable monitoring of all applications": false
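For reference, a minimal sketch of what the corresponding DynaKube fragment might look like, assuming that UI toggle maps to the `applicationMonitoring` mode of the CRD (field names are illustrative of the v1beta1 API, not copied from our actual manifest):

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://XXX.live.dynatrace.com/api   # placeholder tenant URL
  oneAgent:
    # App-only monitoring: injection is done per pod by the mutating webhook
    applicationMonitoring: {}
```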



On a specific pod: we enable injection with an annotation
annotations : : 'true'
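A minimal sketch of that pod manifest, assuming injection is toggled with the `oneagent.dynatrace.io/inject` annotation (the key shown here is an assumption; the original key was lost above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    oneagent.dynatrace.io/inject: "true"   # assumed annotation key
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:latest     # placeholder image
```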



We upgraded from version 0.12.1 to version 0.15.0, and now pods can't be scheduled due to an incorrect deployment generated by the mutating webhook.


So what's wrong?

All Dynatrace pods are deployed, and the webhook tries to update my app's configuration to inject Dynatrace, but it fails with this error:

invalid: spec.initContainers[1].name:
Duplicate value: "install-oneagent"


The problem is that the webhook tries to inject 2 initContainers with the same name into my fresh app deployment, which does not contain any initContainer.

I've analyzed the subject in greater depth.
The problem lies in the mutating webhook delivered by Dynatrace.
There's a webhook idempotency problem, caused by this option:

reinvocationPolicy: "IfNeeded"

The webhook is evidently called twice, which causes the "install-oneagent" init container to be duplicated.
The webhook has to check whether the initContainer has already been added.
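The check described above could look like this minimal sketch of an idempotent mutation. This is a hypothetical illustration of the guard the webhook is missing, not the actual Dynatrace operator code (which is written in Go); the image name is a placeholder:

```python
# Idempotent pod mutation: inject the "install-oneagent" initContainer
# at most once, so a webhook reinvocation leaves the spec unchanged.

INIT_CONTAINER_NAME = "install-oneagent"

def mutate_pod(pod_spec: dict) -> dict:
    """Add the OneAgent install initContainer exactly once."""
    init_containers = pod_spec.setdefault("initContainers", [])
    # Idempotency guard: if a previous invocation already injected the
    # container, do nothing instead of appending a duplicate.
    if any(c.get("name") == INIT_CONTAINER_NAME for c in init_containers):
        return pod_spec
    init_containers.append({
        "name": INIT_CONTAINER_NAME,
        "image": "dynatrace/oneagent:latest",  # placeholder image
    })
    return pod_spec
```

With this guard, calling the mutation twice (as `reinvocationPolicy: IfNeeded` allows) still yields a single initContainer, so the API server accepts the pod.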

This bug was clearly introduced in version 0.15 (see the release notes).

Kubernetes documentation related to this:

Slack's engineers have encountered the same problem as yours!:

"we added reinvocationPolicy: IfNeeded to the MutatingWebhookConfiguration, resulting in the webhook often getting called twice. This is one of the reasons why mutations should be idempotent!"

We are exactly in the case described in the documentation, due to the idempotency problem:

"In the third case above, reinvoking the webhook will result in duplicated containers in the pod spec, which makes the request invalid and rejected by the API server."

So the problem is not in the MutatingWebhookConfiguration object itself but within the dynatrace-webhook pod.


If I set reinvocationPolicy: "Never", the problem disappears and pods can be scheduled.
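As a temporary workaround, the policy can be changed on the webhook configuration. A sketch of the relevant fragment; the object and webhook names here are assumptions, and the operator may revert a manual edit on its next reconciliation:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: dynatrace-webhook          # assumed object name
webhooks:
  - name: webhook.pod.dynatrace.com   # illustrative webhook name
    reinvocationPolicy: Never         # instead of IfNeeded
    # ...rest of the webhook definition unchanged...
```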

I hope you fix this issue fast and take the necessary actions so it does not happen again, because yes, it is a critical issue that can lead to huge downtime.



Dynatrace Champion

Hi @Whisper40 


Please create a support ticket with the same description. Tech support will help you with this issue.


