Can anyone please advise.
I am planning to monitor a Kubernetes cluster but will be installing OneAgent directly on the worker nodes instead of using the recommended Dynatrace Operator.
Will I be missing out on any monitoring functionalities following my approach?
In terms of functionality, it should be the same (RUM, services, etc.).
On the other hand, the Operator handles the lifecycle of the objects, which means it automatically keeps them up to date and reports when a node gets killed due to downscaling (so no problems are created in Dynatrace).
Probably the biggest benefit is that it automates a lot of things and makes deployment and lifecycle management easier.
Hope that helps.
Many thanks for getting back to me.
The only reason I was considering installing agents manually, as opposed to using the Dynatrace Operator, is that I would like Kubernetes OneAgent traffic to go via an existing Environment ActiveGate instead of the containerized ActiveGate.
Is there a way I can avoid the Cloud-native ActiveGate and instead use my existing Environment ActiveGate? Going through the Dynatrace Operator documentation, I could not find a way of avoiding the containerized ActiveGate.
Interesting question, I'm actually in the process of setting up something similar myself. First of all, you should be able to chain some ActiveGate communication, but there are limits:
So an Env. AG can talk to a Cluster AG, but not the other way around. And an Environment AG can't talk to another Environment AG.
The big question here is: are those AG pods "routing" and "kubemon", which the Operator creates on k8s, regarded as Environment AGs or Cluster AGs? The Cluster Management Console lists them under the category Environment ActiveGate, so does that mean they can only communicate with a Cluster AG? The documentation says "Pods must allow egress to your Dynatrace environment or to your Environment ActiveGate in order for metric routing to work properly." That is a confusing statement, because it's unclear whether it refers to the OneAgent pods or the AG pods. Also, does "your environment" mean a Managed cluster node, a Cluster AG, or either one? The doc then mentions a "connection to your Environment AG", but in that scenario the AG pod would indeed be talking to another Env. AG. I think this part of the documentation should be described in more detail...
So, does the connectivity hierarchy only apply to these separate Environment or Cluster AG installations, or also the k8s AG pods? I suppose I'll find out the answer when testing this, but if someone already knows how it works, I'd appreciate the info!
IMO the use of a containerized EAG is not required at all, as long as the existing EAGs are reachable from the worker nodes and containers. Optionally, you can use network zones to make sure your OneAgents connect to the desired EAGs.
I have successfully tested this in an automated app-only setup using the (legacy) OneAgent Operator. I've deployed the Dynatrace Operator as well, but only for K8s monitoring via the containerized "kubemon" EAG, i.e. there is no "routing" EAG container deployed.
In our case we have made sure that both our Managed cluster nodes and the EAGs are reachable directly from the worker nodes and containers.
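As a rough illustration of that kind of setup, here is what a (legacy) OneAgent custom resource could look like when pointing the agents at an existing Environment ActiveGate and pinning them to a network zone. This is only a sketch: the apiUrl host, environment ID, zone name, and secret name are placeholders, and the field names should be verified against the CRD of your Operator version.

```yaml
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  name: oneagent
  namespace: dynatrace
spec:
  # Point the agents at an existing Environment ActiveGate
  # instead of the cluster/SaaS endpoint (placeholder host and ID).
  apiUrl: https://my-eag.example.com:9999/e/<environment-id>/api
  tokens: oneagent-secret   # placeholder secret holding the tokens
  args:
    # Keep OneAgent traffic on ActiveGates in this network zone (placeholder zone name).
    - --set-network-zone=datacenter.zone1
```

The network zone only steers which ActiveGates the agents prefer; the apiUrl still has to be reachable from the pods for the initial connection.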
I have actually done that exact same deployment, i.e. using the OA Operator to connect directly to the Managed cluster. However, the problem is that the OA Operator is currently deprecated, and the documentation suggests using the Dynatrace Operator instead. My question here is: should I honestly recommend this deprecated implementation method for new installations? It doesn't make sense to me. And even if we know that this one specific approach works, that doesn't remove the need for proper documentation from Dynatrace, and for an understanding of which options are available and which make sense for each scenario.
Coming back to this question: the Dynatrace Operator indeed automatically deploys the ActiveGate pod called "routing". But the monitoring data can also be sent directly from the OneAgent pods. So what is the point of that routing pod? Is it just there to bundle monitoring data and thus generate less traffic, or are there other reasons why it's always deployed by default? Does anyone know?
Moses, as to your question "Going through the Dynatrace Operator documentation I could not find a way of avoiding the containerized activegate" -> I think it can be done with this parameter in the cr.yaml:

`routing.enabled` (Optional): Enable routing functionality
I'll get back to you after some tests, but maybe someone can already comment on it?
Update: yes, you can disable the AG pod "routing" with that setting. Note that the documentation is wrong here: the default value is true, not "false" as the parameter table claims:

`routing.enabled` (Optional): Enable routing functionality. Default: `false`
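For reference, a minimal DynaKube sketch that keeps the Kubernetes API monitoring pod but suppresses the routing pod could look like this. The field names follow the v1alpha1 CRD discussed in this thread; the apiUrl and secret name are placeholders, so verify against the documentation for your Operator version.

```yaml
apiVersion: dynatrace.com/v1alpha1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<your-environment>/api   # placeholder endpoint
  tokens: dynakube-secret                  # placeholder secret holding the tokens
  kubernetesMonitoring:
    enabled: true    # keep the "kubemon" ActiveGate pod for the Kubernetes API integration
  routing:
    enabled: false   # do not deploy the "routing" ActiveGate pod
```

With routing disabled, the OneAgents must be able to reach the apiUrl (or an existing Environment ActiveGate) directly.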
Dynatrace Kubernetes Monitoring (ActiveGate)
Purpose: Collects cluster and workload metrics, events, and status from the Kubernetes API.
Dynatrace routing (ActiveGate)
Purpose: Routes information from all Dynatrace components on the Kubernetes cluster through one point.
Hope that gives some more insight.
That doesn't really answer my question. Dynakube-classic pods can already send monitoring data directly to the apiUrl endpoint, so the routing pod isn't really needed in the first place. In the previous OneAgent Operator version I don't think it was even there (or maybe I missed it somehow); it just had the OneAgent pods and the AG pod for the Kubernetes API integration. With the Dynatrace Operator, however, the routing pod is deployed automatically unless you set "routing.enabled = false". In what scenario would we even need it? Or is the point just to minimize traffic volume?
Reading more about this, my understanding is that these are optional features/parameters, especially when Environment/Cluster ActiveGates are already present. Someone can correct me if this understanding is wrong.