Need guidance on Best Practice for ActiveGate Container on Kubernetes connecting to Dynatrace SaaS

Hi Dynatrace team and community,

I am currently deploying Dynatrace OneAgent in a Kubernetes cluster, along with an ActiveGate container (running as a Pod). I have two different traffic flow designs for how the ActiveGate connects to Dynatrace SaaS, and I would like your recommendation on which one is the best practice or officially supported.

Picture 1

AskMeSolutions_0-1751528403786.png

  1. Each Worker Node sends OneAgent data to the ActiveGate Container

  2. The ActiveGate Container then forwards the data to the Environment ActiveGate

  3. Finally, the data is sent to Dynatrace SaaS directly over HTTPS (port 443) 

Picture 2

AskMeSolutions_1-1751528412607.png

  1. Each Worker Node sends OneAgent data to the ActiveGate Container

  2. The ActiveGate Container sends data to Dynatrace SaaS

My Questions:

  1. Between Picture 1 and Picture 2, which one is the best practice when deploying ActiveGate as a container on Kubernetes?

  2. In the case of Picture 2 (proxy):

    • Is it acceptable and supported to have the ActiveGate container send data via a proxy?

    • Do all Worker Nodes need outbound proxy access, or only the nodes running ActiveGate?

  3. Are there advantages or trade-offs in terms of security, scalability, maintenance, or performance between the two designs?

Thank you in advance for your support and recommendations.
(I’ve attached both Picture 1 and Picture 2 diagrams for clarity.)

3 REPLIES

Julius_Loman
DynaMight Legend

Case 1 is officially unsupported: you must not route ActiveGate traffic through another Environment ActiveGate.
Routing traffic via an HTTP proxy, on the other hand, is possible and supported.

Best practice is to deploy the ActiveGate into the K8S environment as part of the Dynatrace Operator deployment and route OneAgent traffic to SaaS through this ActiveGate (the standard behaviour). If an HTTP proxy is required for outbound communication, for example due to network policies, it can be configured directly in the DynaKube resource.
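For illustration, a minimal DynaKube sketch with an HTTP proxy configured for outbound communication. The proxy address, environment URL, and API token secret are placeholders, and the exact fields may vary by Operator version, so treat this as a starting point rather than a definitive manifest:

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Replace with your own environment URL
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  # Outbound HTTP proxy for Operator, OneAgent, and ActiveGate traffic
  # (hypothetical proxy address)
  proxy:
    value: http://proxy.example.com:3128
  oneAgent:
    cloudNativeFullStack: {}
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
```

If the proxy URL contains credentials, it can instead be referenced from a Kubernetes secret via `proxy.valueFrom` so it is not stored in plain text in the manifest.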

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Mizső
DynaMight Guru

Hi @AskMe-Solutions 

As a fallback communication path, I always implement a firewall rule allowing the worker nodes to reach SaaS or Managed on port 443 (if it is allowed). If something happens to the ActiveGate, you still receive OneAgent data.

Best regards,

János

Dynatrace Community RockStar 2024, Certified Dynatrace Professional

Hi all, and thank you @Julius_Loman and @Mizső for your previous insights.

I have a follow-up scenario based on the discussion here.

We are running an on-premise Kubernetes environment where:

AskMeSolutions_0-1751958064588.png

  • The in-cluster ActiveGate container cannot access the internet, so it cannot send data to Dynatrace SaaS directly.

  • We also cannot define an HTTP proxy in the DynaKube CRD (due to policy restrictions or lack of proxy infrastructure).

Given that both outbound direct access and proxy-based access are not allowed, what are the available options to make Dynatrace work in this kind of environment?

Specifically:

AskMeSolutions_3-1751957736242.png

  • Can we route data from OneAgent (inside the K8s cluster) through the internal network to an external Environment ActiveGate (e.g., deployed in a DMZ or management network that has internet access)?

Any guidance, architecture recommendation, or documentation regarding air-gapped or network-restricted Kubernetes environments would be greatly appreciated.
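For the scenario described above, one documented pattern is to point the DynaKube `apiUrl` at an Environment ActiveGate instead of the SaaS endpoint, so all OneAgent and in-cluster ActiveGate traffic flows through it. A minimal sketch, assuming a hypothetical Environment ActiveGate reachable at `activegate.internal.example.com` in the DMZ and listening on the default port 9999:

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # apiUrl targets the Environment ActiveGate rather than SaaS directly;
  # hostname and <environment-id> are placeholders for your own values
  apiUrl: https://activegate.internal.example.com:9999/e/<environment-id>/api
  oneAgent:
    classicFullStack: {}
```

The cluster then only needs internal network access to the ActiveGate, which in turn needs outbound HTTPS (443) to Dynatrace SaaS; whether this fits your case should be confirmed against the official network-restricted deployment guidance.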

Thanks in advance!
