29 Apr 2025 08:25 AM
Hello all,
We are experiencing an issue in our OpenShift environment where multiple pods belonging to the OneAgent DaemonSet are stuck in the CrashLoopBackOff state.
Details:
Dynatrace Namespace: dynatrace
Workload Name: openshift-main-cluster-oneagent
Workload Type: DaemonSet
Issue Observed: Multiple pods are continuously crashing and restarting (CrashLoopBackOff).
Request:
Could you please assist in identifying the root cause and recommend the necessary steps to stabilize the OneAgent pods?
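For anyone triaging a similar CrashLoopBackOff on the OneAgent DaemonSet, a minimal first pass with plain oc/kubectl could look like the sketch below. The namespace comes from the details above; the pod name is a placeholder to replace with one of the affected pods.

    # List pods in the dynatrace namespace and spot the ones in CrashLoopBackOff
    oc get pods -n dynatrace -o wide
    # Inspect events and container state for one affected pod (replace the placeholder name)
    oc describe pod <oneagent-pod-name> -n dynatrace
    # Logs of the previous (crashed) container instance
    oc logs <oneagent-pod-name> -n dynatrace --previous
    # Recent namespace events, newest last
    oc get events -n dynatrace --sort-by=.lastTimestamp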
29 Apr 2025 08:34 AM
Hi @Aboud,
What deployment method do you use? Would you be able to share your dynakube.yaml? Could you also provide the kubectl describe output for one of the OneAgent pods? Have you tried reaching out to Dynatrace One support?
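If it helps, the two artifacts asked for here can usually be exported with something like the following sketch; it assumes the DynaKube custom resource and the OneAgent pods live in the dynatrace namespace, and the pod name is a placeholder.

    # Export the DynaKube custom resource as applied in the cluster
    oc get dynakube -n dynatrace -o yaml > dynakube-export.yaml
    # Capture the describe output of one crashing OneAgent pod
    oc describe pod <oneagent-pod-name> -n dynatrace > oneagent-pod-describe.txt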
Paweł
29 Apr 2025 03:40 PM
Hello @pawel_harbuz ,
Thank you for your response.
Deployment Method: We are using the "full observability / cloud-native" deployment method.
DynaKube YAML: Unfortunately, I'm unable to share the dynakube.yaml at this moment.
OneAgent Pod Details: I’ll try to coordinate with our OpenShift team to collect the kubectl describe output for the affected OneAgent pods.
Support Contact: Yes, we have already reached out to Dynatrace One support and are currently working with them on the issue. They've asked us to collect a support archive and CSI driver logs, which we’re in the process of retrieving through our OpenShift team.
Appreciate your help, and I’ll share any further details once available.
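For reference, collecting the CSI driver logs mentioned above could look roughly like the sketch below. The DaemonSet name dynatrace-oneagent-csi-driver and the container name server are assumptions about the default deployment (confirm with oc get ds -n dynatrace), and the support-archive subcommand and its flags should be checked against the Dynatrace docs for your operator version.

    # CSI driver logs (DaemonSet and container names assumed; verify first with: oc get ds -n dynatrace)
    oc logs -n dynatrace daemonset/dynatrace-oneagent-csi-driver -c server --tail=500
    # Operator support archive (subcommand/flags vary by operator version; verify in the Dynatrace docs)
    oc exec -n dynatrace deployment/dynatrace-operator -- dynatrace-operator support-archive --stdout > operator-support-archive.zip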
29 Apr 2025 12:27 PM
Check the troubleshooting guide:
https://docs.dynatrace.com/docs/ingest-from/setup-on-k8s/deployment/troubleshooting
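A related first step, if your operator version supports it, is the operator's troubleshoot subcommand, which checks items such as the DynaKube configuration and connectivity to the Dynatrace environment. A minimal invocation on OpenShift might look like this, assuming the default dynatrace namespace and operator deployment name:

    # Run the operator's built-in troubleshooter against the current DynaKube setup
    oc exec -n dynatrace deploy/dynatrace-operator -- dynatrace-operator troubleshoot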
04 May 2025 12:29 PM
Hello @PacoPorro @pawel_harbuz ,
Per Dynatrace support's feedback, the issue is caused by a bug in Dynatrace Operator version 1.5.0,
so we have to upgrade it to version 1.5.1.
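For anyone following up, a manifest-based upgrade to 1.5.1 could look roughly like the sketch below. The release asset names (openshift.yaml, openshift-csi.yaml) are assumptions based on the usual layout of Dynatrace Operator GitHub releases; if the operator was installed via OperatorHub or Helm, upgrade through that channel instead.

    # Apply the 1.5.1 manifests (asset names assumed; verify on the v1.5.1 release page first)
    oc apply -f https://github.com/Dynatrace/dynatrace-operator/releases/download/v1.5.1/openshift.yaml
    oc apply -f https://github.com/Dynatrace/dynatrace-operator/releases/download/v1.5.1/openshift-csi.yaml
    # Watch the operator and OneAgent pods roll out on the new version
    oc get pods -n dynatrace -w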