I am completely new to the Dynatrace environment and am still struggling with some concepts.
I am using OpenShift 4.5 to deploy Dynatrace OneAgent via the Dynatrace Operator. Everything is running as intended, but what I do not understand is why so many dependencies are set up on the underlying node (e.g. certificates; there are also quite a few binaries under `/opt/dynatrace/oneagent/agent/bin`). Shouldn't those be part of the image or, in the case of certificates, mounted via a ConfigMap/Secret?
The reason I am asking is that the whole idea of running OpenShift/Kubernetes on a CoreOS-type OS is that no dependencies are placed on the underlying host, and I am worried this might interfere with the update process of the OpenShift 4 platform.
Thank you and best regards,
Hi @Bostjan B.,
I assume you have deployed the OneAgent Operator and it deployed OneAgent via a container on each node - probably the most common approach on OpenShift. Files are put on the node so that code modules can be injected into other processes, both those running in containers and those running on the host itself.
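For reference, the classic full-stack rollout is driven by a `OneAgent` custom resource that the Operator watches. A minimal sketch is below; the environment URL and token secret name are placeholders, so check the Operator docs for the exact fields of your version:

```yaml
# Sketch of a OneAgent custom resource (values are placeholders).
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  name: oneagent
  namespace: dynatrace
spec:
  # URL of your Dynatrace environment API (or an ActiveGate endpoint)
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  # Name of the secret holding the API/PaaS tokens
  tokens: oneagent
```

The Operator then generates and maintains the node-level DaemonSet from this resource.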
I'm not sure about your question regarding certificates, as they are not involved in the deployment of OneAgent itself, but the Operator uses an HTTPS connection to the Dynatrace API (either to the cluster or to an ActiveGate) to fetch the installers.
If the question is about why there are some certificate files in the /opt/oneagent directories, it's because the same OneAgent is also used on non-Kubernetes platforms.
There are also additional deployment strategies available with OneAgent Operator 0.8.0, such as automated runtime injection using admission controllers.
Hello @Julius L.
thank you very much for the information provided. Does that mean that if I use the default deployment strategy (via the OneAgent CRD), the pods created by that DaemonSet fetch those modules/wrappers so they are available to processes running on the nodes? And are the OneAgent pods then responsible only for keeping those modules up to date, or do they also orchestrate the whole Dynatrace collection logic on the underlying host?
I guess that if I do not need host monitoring, I could use OneAgentAPM instead, which would run just as a sidecar and would not touch the host's filesystem, right?
Maybe one more sub-question: do you know what Red Hat's stance on this is? As far as I know, there should be no changes to the underlying host unless they are done via an image/MachineConfigs.
Best regards, Bostjan
For the 'default' strategy, the OneAgent DaemonSet injects into your application pods and also has access to metrics on the host. The Operator is what keeps the DaemonSet up to date. The /opt directory is on a shared volume so that each pod can use the same binaries without having to download and install them every time.
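To make the shared-volume part concrete, here is a hedged sketch of what such a DaemonSet pod spec looks like. This is illustrative only - the actual manifest is generated by the Operator, and the names and image reference here are placeholders:

```yaml
# Illustrative DaemonSet sketch; not the exact manifest the Operator generates.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: oneagent
  namespace: dynatrace
spec:
  selector:
    matchLabels:
      name: oneagent
  template:
    metadata:
      labels:
        name: oneagent
    spec:
      containers:
      - name: oneagent
        image: docker.io/dynatrace/oneagent   # placeholder image reference
        securityContext:
          privileged: true                    # needed for host-level access
        volumeMounts:
        - name: host-root
          mountPath: /mnt/root
      volumes:
      - name: host-root
        hostPath:
          path: /    # the agent installs itself under /opt/dynatrace on the node
```

Because the binaries land on a `hostPath` on each node, every pod on that node can reuse them without a per-pod download.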
If you do not care about host monitoring, then OneAgentAPM will work as you say, without the shared volume, though there is a per-pod performance hit, since the agents must be unzipped and installed for each pod.
Red Hat is one of our largest partners. We work with them jointly on both technical and sales opportunities, and collaborate closely with their engineering teams. Their stance is that monitoring is critical and there are always trade-offs. If you want host metrics, you need access to the host.
We are exploring alternative ways of mounting volumes across pods. We are also aware of some custom configurations at a few of our larger customers who are isolating these mounts by namespace using either their own, or some third-party solutions.
Thank you very much for the info, and sorry for the late reply.
Maybe just a follow-up question regarding the OneAgent DaemonSet: what do you mean by injecting into application pods? I went through the docs and still have trouble fully understanding how everything works. This is what I currently have:
What I am missing is:
Hi @Bostjan B.,
How are applications accessing `/opt/dynatrace` on the underlying host? Pods require permissions to mount a `hostPath`. I also checked some pods, and at least in their configuration they do not mount it.
It's on the host. Injection into applications is performed by the host by preloading a library into each process. Processes in a container are still processes on the host, just running in a separate cgroup.
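For anyone curious how preloading works in general, here is a generic, self-contained demo of the mechanism (this is an illustration of the Linux preload technique, not Dynatrace's actual module). A preloaded shared library's constructor runs inside every dynamically linked process that starts with it:

```shell
# Build a tiny shared library with a constructor that runs before main().
cat > preload_demo.c <<'EOF'
#include <stdio.h>

/* Runs automatically when the library is loaded into a process. */
__attribute__((constructor))
static void injected_init(void) {
    fprintf(stderr, "injected: running inside the process\n");
}
EOF
cc -shared -fPIC -o libpreload_demo.so preload_demo.c

# LD_PRELOAD (or /etc/ld.so.preload, system-wide) loads the library into
# the target process before its own code runs:
LD_PRELOAD=$PWD/libpreload_demo.so /bin/echo "hello from the app"
```

Since containerized processes are host processes, a system-wide preload on the host reaches them too.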
How do I mark an application for monitoring? Or is this done automatically as soon as the DaemonSet starts, with the rest configured on the Dynatrace side?
Not sure what you mean by that. As soon as OneAgent is injected into your processes, instrumentation happens and tracing and metrics are collected.
Hello @Julius L.,
Ah OK, that makes sense. I was not sure how the libraries are injected into applications; now I understand.
Regarding the second question, it was more about how it is specified which applications the libraries are injected into. As far as I understand, this is done for all started pods, including platform pods (e.g. the etcd container).
Exactly. It happens for each process started on the host. Depending on the technology used in the process, the library then chooses the right instrumentation (Java/Node.js/PHP...). This is a key differentiator from other deep-monitoring tools that do instrumentation.
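The "containers are just host processes in a separate cgroup" point can be seen directly on any Linux box. A quick generic check (nothing Dynatrace-specific here):

```shell
# Every Linux process, containerized or not, appears in the host's /proc
# and reports its cgroup membership; containers are ordinary processes
# placed into their own cgroup.
cat /proc/self/cgroup   # this process's cgroup (container-specific path inside a container)
ls -d /proc/1           # PID 1 is visible like any other host process
```

An agent running on the host can therefore observe and attach to containerized processes the same way it does to any other process.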