/opt/dynatrace not available in daemonset pods until pod is deleted/respawned

kirk_dahl
Newcomer

When pods are started, you can perform the following: 

kubectl exec -it {pod} -n {namespace} -- ls /opt 

 

And you will see the dynatrace directory. This is needed for nginx monitoring, because a shared object in this directory is loaded into nginx via the main-snippet config.
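For context, that kind of configuration might look roughly like the following sketch for the ingress-nginx controller. The ConfigMap name, namespace, and the module path under /opt/dynatrace are placeholders for illustration, not the actual values from our deployment:

```yaml
# Sketch only: the name, namespace, and .so path below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  main-snippet: |
    # Load the agent shared object shipped under /opt/dynatrace on the node.
    load_module /opt/dynatrace/<agent-module>.so;
```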

 

However, when a node is started, all DaemonSets start at the same time; the scheduler applies no ordering or precedence, so the OneAgent is not always up before other pods. After a node starts up, if you delete the DaemonSet pod, the newly created pod will have the /opt/dynatrace directory in it, available for nginx (which is also running as a DaemonSet).

Unfortunately, we've tested using initContainers to delay the start of the nginx container, without success. That is because the pod is already started; it is not the starting (or restarting) of the containers in the pod that pulls in /opt/dynatrace, it must be the pod itself. The pod also does not have a restart policy on failure.

Searching the Kubernetes KEPs, there is mention of adding the ability to order DaemonSets with taints and controllers, of having pods restarted on container failure, and of adding preStart hooks. My next step will be to taint the nodes via the kubelet so that other pods cannot start until the critical DaemonSet (Dynatrace) has started.
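As a rough sketch of that taint-based approach (the taint key, names, and manifest below are illustrative assumptions, not the actual kubelet or Dynatrace configuration): register new nodes with a startup taint, let only the agent DaemonSet tolerate it, and clear the taint once the agent is up so the remaining workloads can be scheduled.

```yaml
# Sketch only: taint key, names, and image are placeholders.
# Kubelet side: register the node with a startup taint.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerWithTaints:
  - key: example.com/oneagent-not-ready
    value: "true"
    effect: NoSchedule
---
# DaemonSet side: only the critical agent tolerates the startup taint.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: oneagent                 # placeholder, not the real Dynatrace manifest
  namespace: dynatrace
spec:
  selector:
    matchLabels:
      app: oneagent
  template:
    metadata:
      labels:
        app: oneagent
    spec:
      tolerations:
        - key: example.com/oneagent-not-ready
          operator: Exists
          effect: NoSchedule
      containers:
        - name: oneagent
          image: <oneagent-image>  # placeholder
```

Something still has to clear the taint once the agent is ready, for example a small controller or a job running `kubectl taint nodes <node> example.com/oneagent-not-ready:NoSchedule-`; this mirrors the startup-taint pattern some CNI plugins use for their own agent-not-ready condition.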




2 REPLIES

ChadTurner
DynaMight Legend

Very interesting read.

-Chad

Julius_Loman
DynaMight Legend

AFAIK this depends on the deployment option: it happens with classicFullStack and is resolved in cloudNativeFullStack, which unfortunately has other limitations at the moment.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner
