Best Practices for ActiveGate Communication for Kubernetes Clusters in an On-Prem Environment - Dynatrace Managed

agylpradipta
Helper

Hello everyone,

I am planning to implement Dynatrace Managed (On-Prem). The architecture diagram is attached.

agylpradipta_1-1728270634206.png

 

Here’s the current situation:

  • Dynatrace Managed Cluster, App-1, and App-2 are all located in separate 'Virtual Data Centers' (VDC). They can't communicate directly unless VPC peering is set up, allowing certain servers in one VDC to connect to servers in another VDC.
  • Both App-1 and App-2 are Kubernetes on-prem environments, with the teams having set up their own Kubernetes clusters.

My questions are:

  • Is it possible for only the ActiveGates to communicate between VDCs?
  • In this case, monitoring data from the Kubernetes clusters would first be sent to the ActiveGate, which would then forward the monitoring data to the Managed Cluster.
  • If this is possible, should I use an Environment ActiveGate or a Cluster ActiveGate?
  • Or is there a best practice you would recommend for my request?

I would appreciate your suggestions and assistance. Thank you.


Mizső
DynaMight Guru

 Hi @agylpradipta,

You should use network zones to separate the traffic.

Network zones - Dynatrace Docs

 

Mizs_0-1728277639092.png

You should use an Environment ActiveGate. I recommend containerized Environment AGs inside the Kubernetes clusters (the number of containerized AGs depends on the size of the Kubernetes cluster), plus at least one non-containerized Environment AG per network zone to run extensions.

Regarding the containerized Environment AG: with ClassicFullStack and CloudNativeFullStack you can configure it in the DynaKube custom resource YAML. With application-only instrumentation you can deploy an Environment AG manually:

Manually deploy ActiveGate as a StatefulSet - Dynatrace Docs
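For reference, here is a minimal sketch of what the containerized ActiveGate section in the DynaKube custom resource can look like (assuming the dynatrace-operator v1beta1 CRD; the API URL, network zone name, and replica count are placeholders, not sizing recommendations):

  # Illustrative DynaKube sketch - adjust to your operator version and environment.
  apiVersion: dynatrace.com/v1beta1
  kind: DynaKube
  metadata:
    name: dynakube
    namespace: dynatrace
  spec:
    # Placeholder Managed endpoint: cluster URL plus environment ID.
    apiUrl: https://<managed-cluster>/e/<environment-id>/api
    # Optional: pin this Kubernetes cluster's traffic to a network zone.
    networkZone: vdc-app1
    oneAgent:
      cloudNativeFullStack: {}
    activeGate:
      # Containerized Environment ActiveGate with routing and K8s monitoring.
      capabilities:
        - routing
        - kubernetes-monitoring
      replicas: 2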

I hope it helps.

Best regards,

Mizső

Dynatrace Community RockStar 2024, Certified Dynatrace Professional

Peter_Youssef
Champion

Hello @agylpradipta 

Thanks @Mizső for your valuable inputs as usual.

First of all, give careful consideration to the points below, as per Dynatrace best practices:

  1. Size the cluster nodes properly; to be on the safe side, use M or L node sizes.
  2. Ensure communication to the Managed Cluster (MC) works properly.
  3. For K8s monitoring, ensure at least one containerized AG is deployed in each K8s cluster; alternatively, you can also connect through a Cluster AG or directly to the cluster.
  4. To run extensions, consider creating network zones and an AG group for each purpose, for example two AGs for extensions and two AGs for routing OneAgent traffic.
  5. For each K8s cluster, create a dedicated network zone to ensure traffic takes the expected path: through the containerized AG and then to the cluster.
  6. Having a Cluster AG is one of the best practices: it lets all external monitoring traffic flow seamlessly into the Dynatrace cluster nodes for processing without putting excessive load on them.
  7. As per the current design, traffic will flow directly from the AG to the cluster node, as there is no Cluster AG.
  8. K8s-related configurations can be customized through the DynaKube YAML file.
  9. You can create an AG group either through the AG's custom.properties file or via remote configuration in the UI (see the sketch after this list).
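As a quick sketch of the custom.properties route from point 9 (the group name is illustrative; on a containerized AG the group can instead be set in the DynaKube spec):

  # custom.properties on the ActiveGate host
  [collector]
  group = routing-app1

The ActiveGate's network zone is typically assigned at installation time instead, for example with the installer's --set-network-zone parameter.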

Hoping this quick info adds value.

KR,

Peter.

 

Thank you, @Mizső  and @Peter_Youssef, for your feedback.

There are a few things I'd like to confirm regarding the suggestions you've provided:

  1. When you mention the containerized Env AG, are you referring to what's outlined in this documentation: https://docs.dynatrace.com/docs/setup-and-configuration/dynatrace-activegate/activegate-in-container?
  2. As far as I know, there's also an ActiveGate configuration in the dynakube.yaml file, as shown below. Is this the same containerized Env AG that you're referring to?
    agylpradipta_2-1728286184214.png

     

  3. Lastly, can monitoring data from OneAgent in the Kubernetes cluster be sent directly to the ActiveGate mentioned earlier? From what I understand, in DynaKube the data is sent to the defined API URL, right? Do we need to modify that part as well?
    agylpradipta_3-1728286218984.png

     

Thank you.

Hello @agylpradipta 

It's as simple as that; the attached examples should give you the idea:

2024-10-07_12h15_05.png

2024-10-07_12h19_39.png

2024-10-07_12h24_10.png

Regarding the last point (3):

  • For K8s monitoring data routing, you can change the API URL if that's required; otherwise there's no need.
  • Monitoring data can certainly be routed to the cluster directly; as long as an Environment AG or containerized AG routes the traffic to the cluster, that's the preferred approach (see the sketch after this list).
  • Containerized AG Configurations 
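For example, a sketch with placeholder host names: if OneAgent traffic should reach the Managed cluster through an Environment ActiveGate, the apiUrl in the DynaKube can point at that AG (default ActiveGate port 9999) instead of directly at a cluster node:

  spec:
    # Placeholder host: route API traffic via an Environment ActiveGate.
    apiUrl: https://<activegate-host>:9999/e/<environment-id>/api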

KR, 

Peter.

gopher
Mentor

Hi @Peter_Youssef ,
The two replies above pretty much nail it on the head. Your network design approach will work.

The only real architectural consideration is splitting the AGs out based on functionality. This is critical for redundancy and capacity.

Depending on requirements, you should really consider having a minimum of two 'routing' AGs for agent traffic capacity and redundancy, and at least one AG for running extensions (as mentioned above). Extensions can be resource intensive.
In addition, you might also want to consider an 'API' ActiveGate, if required, for pushing metrics/OTel data, pulling agents/images, or querying data, as this can be resource intensive as well.


Obviously, the size of what you're monitoring and your budget come into this, but keep in mind that you will eventually hit a practical limit on node/host size, and performance issues will kick in if you try to run it all on a single deployment. Not to mention lifecycle management: you don't want to take out all your monitoring by having to bounce a single routing ActiveGate.

The rest (including AG functionality) is configuration.
One thing that will be critical in a deployment like this is using AG groups and network zones, to prevent both the agents and extensions from trying to reach the wrong network segments.

'Default' extension groups will run on all ActiveGates until they find one that works.
By default, all agents have DNS and IP entries for all available ActiveGates and Dynatrace endpoints, so the same principle applies: agents will try to communicate with every endpoint until successful.

Making sure that you use ActiveGate groups, configuring agent deployments with network zones, and setting 'drop traffic' configurations on your network zones will reduce incorrect network traffic and unwanted failed log entries. A combined sketch follows.
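Putting that together, a hedged sketch of the relevant DynaKube fields (the zone and group names are illustrative; the 'drop traffic' fallback itself is configured on the network zone in the Dynatrace UI/API, not in the DynaKube):

  spec:
    # Keep this cluster's agents and containerized AG inside one zone.
    networkZone: vdc-app1
    activeGate:
      # Assign the containerized ActiveGate to a dedicated AG group.
      group: routing-app1
      capabilities:
        - routing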

Featured Posts