08 Jun 2023 10:00 AM
Hello, we are in the process of adopting the new Dynatrace Cloud Native deployment to monitor our Kubernetes cluster.
Right now we use Classic Full Stack, which has the limitation that the OneAgent pod must already be running on a node before we can get deep-level monitoring of the application pods; otherwise we have to restart the app pods every time.
Apparently the Cloud Native deployment overcomes this limitation, BUT it also seems to bring a very hard dependency on the OneAgent pods, i.e. during node scaling the app pods wait for the OneAgent and csi-driver pods to come up first, which concerns me for two reasons -
Please let me know if anyone has any advice on this.
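For context, our Cloud Native setup uses a DynaKube roughly like the one below (a minimal sketch; the environment URL is a placeholder for our real value):

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Placeholder tenant URL
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # Cloud Native Full Stack: the webhook injects an init container into
    # app pods, and the csi-driver pods mount the OneAgent code modules
    cloudNativeFullStack: {}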
Best Regards,
Shashank
08 Jun 2023 11:02 AM
Hello @agrawal_shashan
I might have misunderstood, but have a look at the configurable failure policy.
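If I remember correctly, it is configured through an annotation on the DynaKube, along these lines (a sketch only; please double-check the exact annotation name and default for your operator version):

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    # "silent": if the OneAgent init container cannot be prepared, the app
    # pod starts uninstrumented instead of being blocked; "fail" blocks it.
    feature.dynatrace.com/injection-failure-policy: silent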
Regards,
Babar
08 Jun 2023 12:09 PM
Hi @Babar_Qayyum Thanks for the response. I have had a look at the link you sent. It is useful, but it does not really fit my use case. My question is: since the app pods wait for the Dynatrace pods to come up, what happens in the worst case, when the Dynatrace pods do not come up at all? I believe the app pods would keep waiting and our application would be unavailable. I am sure Dynatrace must have thought about this, right?
Best Regards,
Shashank
08 Jun 2023 12:18 PM
For reference, see https://community.dynatrace.com/t5/Product-ideas/RFE-Improve-init-container-resiliency-when-CSI-driv...
@florian_g are any improvements planned here?