Dynatrace OneAgent Pods in CrashLoopBackOff – Assistance Needed

Aboud
Helper

Hello all,

We are experiencing an issue in our OpenShift environment where multiple pods belonging to the OneAgent DaemonSet are stuck in CrashLoopBackOff state.

Details:

  • Dynatrace Namespace: dynatrace

  • Workload Name: openshift-main-cluster-oneagent

  • Workload Type: DaemonSet

  • Issue Observed: Multiple pods are continuously crashing and restarting (CrashLoopBackOff).

  • Screenshot attached: Aboud_0-1745911426354.png

Request:
Could you please assist in identifying the root cause and recommend the necessary steps to stabilize the OneAgent pods?
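For anyone hitting the same symptom, a minimal triage sketch looks like this (it assumes the `dynatrace` namespace from the post; the pod name is a placeholder):

```shell
# List pods in the dynatrace namespace and check restart counts / states
oc get pods -n dynatrace

# Inspect events and the last termination reason for one affected pod
oc describe pod <oneagent-pod-name> -n dynatrace

# Fetch logs from the previous (crashed) container instance,
# which usually shows why the container exited
oc logs <oneagent-pod-name> -n dynatrace --previous
```

The `--previous` flag is the key part: with CrashLoopBackOff, the current container is often too short-lived to log anything useful.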

4 Replies

pawel_harbuz
Helper

Hi @Aboud

Which deployment method do you use? Would it be possible to share your dynakube.yaml? Could you also provide the `kubectl describe` output for one of the OneAgent pods? Have you already reached out to Dynatrace One support?

Paweł
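The diagnostics requested above can be collected roughly like this (a sketch, assuming the default `dynatrace` namespace; `<pod-name>` is a placeholder for an affected OneAgent pod):

```shell
# Export the DynaKube custom resource (API tokens live in a referenced
# Secret, not in this YAML, so it is usually safe to share after review)
oc get dynakube -n dynatrace -o yaml > dynakube.yaml

# Capture the pod description, including events and restart reasons
oc describe pod <pod-name> -n dynatrace > oneagent-describe.txt
```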

Hello @pawel_harbuz ,

Thank you for your response.

  1. Deployment Method: We are using the "full observability / cloud-native" deployment method.

  2. DynaKube YAML: Unfortunately, I'm unable to share the dynakube.yaml at this moment.

  3. OneAgent Pod Details: I’ll try to coordinate with our OpenShift team to collect the kubectl describe output for the affected OneAgent pods.

  4. Support Contact: Yes, we have already reached out to Dynatrace One support and are currently working with them on the issue. They've asked us to collect a support archive and CSI driver logs, which we’re in the process of retrieving through our OpenShift team.

Appreciate your help, and I’ll share any further details once available.
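For reference, collecting the artifacts support asked for can look like the sketch below. The `support-archive` subcommand and the CSI driver resource name are assumptions based on recent dynatrace-operator releases and should be verified against the installed version:

```shell
# Generate a support archive from the operator deployment
# (subcommand available in recent dynatrace-operator versions)
oc exec -n dynatrace deployment/dynatrace-operator -- \
  dynatrace-operator support-archive --stdout > operator-support-archive.tgz

# Collect logs from all containers of the CSI driver daemonset
# (resource name assumed; check with: oc get daemonset -n dynatrace)
oc logs -n dynatrace daemonset/dynatrace-oneagent-csi-driver \
  --all-containers > csi-driver.log
```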

Aboud
Helper

Hello @PacoPorro @pawel_harbuz ,

According to feedback from Dynatrace support, the issue is caused by a bug in Dynatrace Operator version 1.5.0.

So we have to upgrade it to version 1.5.1.
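One way to perform that upgrade is sketched below (the release-manifest URL follows the dynatrace-operator GitHub release convention, and the Helm chart location follows Dynatrace's published OCI registry; verify both against your installation method):

```shell
# If the operator was installed from the release manifests:
oc apply -f https://github.com/Dynatrace/dynatrace-operator/releases/download/v1.5.1/openshift.yaml

# If it was installed via Helm instead:
helm upgrade dynatrace-operator \
  oci://public.ecr.aws/dynatrace/dynatrace-operator \
  -n dynatrace --version 1.5.1
```

After the upgrade, the OneAgent daemonset pods should be rolled and leave CrashLoopBackOff once they come back up.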
