Is there a problem using the Azure OneAgent extension on the scale set instead of the Kubernetes Operator?
In other words, why not install a OneAgent directly on a Kubernetes cluster host?
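For context, attaching the OneAgent to a scale set via the Azure VM extension might look roughly like the sketch below. This is a hedged illustration, not an official install guide: the resource-group and scale-set names are placeholders, and the extension name, publisher, and settings keys should be verified against the Dynatrace and Azure documentation before use.

```shell
# Sketch: attach the Dynatrace OneAgent VM extension to an existing Linux
# scale set. Extension/publisher names and settings keys are assumptions.
az vmss extension set \
  --resource-group my-rg \
  --vmss-name my-scale-set \
  --name oneAgentLinux \
  --publisher dynatrace.ruxit \
  --settings '{"tenantId": "<environment-id>", "token": "<PaaS-token>"}'

# Instances added by a scale-out inherit the extension from the scale set
# model; existing instances pick it up when upgraded to the latest model.
az vmss update-instances --resource-group my-rg --name my-scale-set --instance-ids "*"
```

With this approach the extension lifecycle follows the scale set model, which is part of why the scale-in/scale-out behavior differs from the Operator-managed deployment discussed below.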
@henk_stobbe We only leverage the Kubernetes cluster connection via the API integration, while the OneAgent gets deployed out to the nodes/pods/containers. We would love to get more of the observability and functionality we have all come to know and love with the Dynatrace OneAgent.
Found another thread about this (-;
In simple terms, it's the Dynatrace way vs. the Kubernetes way (-; So there is no difference, except when hosts are scaled in or out.
@henk_stobbe Dynatrace also changed their methodology for the OneAgent and Operator deployment. The current standard is to leverage an ActiveGate (AG) that runs as a pod on the node. That is a good idea for small use cases and quick spin-ups for proofs of concept, but I'm not sold on it as an enterprise solution.
With the older method you point the Operator at a designated AG, so all communication targets that one AG and then goes on to the Dynatrace UI. Thus only one firewall rule needs to be opened, for that AG's communication.
But with the new standard, if each node/cluster is going to have an AG pod, then we need to list all of those IPs for the networking team to allow the communication. That feels like a step backwards, since a core purpose of the AG is to bundle all the traffic and send it along.
So I'm not sold on it; we will be sticking with the OneAgent Operator deployment with dedicated AGs and cluster APIs.
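The dedicated-AG setup described above can be sketched as a DynaKube resource for the Dynatrace Operator. This is a minimal illustration, not the poster's actual config: the tenant URL is a placeholder, and the exact field and capability names should be checked against the DynaKube CRD version in use.

```yaml
# Sketch: classic full-stack OneAgent with one routing ActiveGate, so all
# OneAgent traffic funnels through a single egress point (one firewall rule).
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  oneAgent:
    classicFullStack: {}        # OneAgent rolled out to every node
  activeGate:
    capabilities:
      - routing                 # bundle and forward OneAgent traffic
      - kubernetes-monitoring   # cluster API integration
    replicas: 1
```

The `routing` capability is what keeps the outbound traffic funneled through the AG rather than having each node talk to Dynatrace directly.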