23 Dec 2021 01:19 PM
Hello,
Is there a problem with using the Azure OneAgent extension on the scale set instead of the Kubernetes Operator?
In other words, why not just install a OneAgent on a Kubernetes cluster host?
KR Henk
23 Dec 2021 01:24 PM
@henk_stobbe We only leverage the Kubernetes cluster connection via the API integration, while the OneAgent gets deployed out to the nodes/pods/containers. We would love to get the additional observability and functionality we have all come to know and love with the Dynatrace OneAgent.
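For reference, both halves of that setup live in the Operator's DynaKube custom resource. Here is a minimal sketch, assuming the current dynatrace-operator and its v1beta1 CRD; the tenant URL and names below are placeholders:

```yaml
# Minimal DynaKube sketch: the API connection for cluster monitoring plus the
# OneAgent rollout to every node. Tenant URL and names are placeholders.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # API endpoint of the Dynatrace environment (placeholder tenant ID)
  apiUrl: https://abc12345.live.dynatrace.com/api
  # Deploys a full-stack OneAgent to every node via a DaemonSet
  oneAgent:
    classicFullStack: {}
  # In-cluster ActiveGate that queries the Kubernetes API
  activeGate:
    capabilities:
      - kubernetes-monitoring
```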
23 Dec 2021 02:35 PM - edited 23 Dec 2021 02:38 PM
Found another thread about this (-;
https://community.dynatrace.com/t5/Dynatrace-Open-Q-A/Kubernetes-Monitoring/m-p/175620#M18913
In simple terms, it's the Dynatrace way vs. the Kubernetes way (-; So there is no difference, except when hosts are scaled in or out.
KR Henk
23 Dec 2021 02:44 PM
@henk_stobbe So Dynatrace has also changed its methodology for the OneAgent and Operator deployment. The current standard is to leverage an ActiveGate (AG) that is created as a pod on the node, which is a good idea for small use cases and quick spin-ups for proofs of concept, but I'm not sold on it as an enterprise solution.
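If I'm reading the new standard right, it corresponds to enabling the routing capability on the in-cluster ActiveGate in the DynaKube resource, so OneAgent traffic egresses from AG pods inside the cluster. A rough sketch, with placeholder values:

```yaml
# Sketch of the "AG as a pod" standard: the Operator runs an in-cluster
# ActiveGate with routing enabled, and OneAgents send their traffic through it.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://abc12345.live.dynatrace.com/api  # placeholder tenant ID
  oneAgent:
    classicFullStack: {}
  activeGate:
    capabilities:
      - kubernetes-monitoring
      - routing   # OneAgent traffic leaves the cluster via this AG pod
```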
Using the older method, you point the Operator to communicate via a designated AG, so all the communication targets that one AG and then flows on to Dynatrace, thus only needing one firewall rule to be opened for that AG's communication.
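That older pattern can still be expressed in the DynaKube resource by aiming apiUrl at the dedicated Environment ActiveGate instead of the SaaS endpoint, so every OneAgent funnels through the one egress point. A hedged sketch; the hostname, port, and tenant ID are placeholders:

```yaml
# Older pattern: all OneAgent traffic goes through one dedicated, external
# Environment ActiveGate, so only one firewall rule is needed.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # API URL of the dedicated ActiveGate (placeholder host and tenant ID)
  apiUrl: https://activegate.example.internal:9999/e/abc12345/api
  oneAgent:
    classicFullStack: {}
```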
But with the new standard, if nodes/clusters are each going to have an AG pod, then we need to list all of those IPs for the networking team to allow the communication. That feels like a step backwards, as a core fundamental of the AG is to bundle all the traffic and send it along.
So I'm not sold on it; we will be sticking with the OneAgent Operator deployment with dedicated AGs and cluster API integrations.