08 Sep 2017 12:45 AM - last edited on 24 Feb 2023 02:48 AM by Karolina_Linda
We have multiple DCs and we need to deploy Dynatrace Managed. I would like to understand the deployment scenario and the hardware/software requirements for it. For example, if I have 2 DCs, will it require 2 Security Gateways and 1 Managed server (assuming we have connectivity between the two DCs)? Please validate my understanding.
For agents running on-premise (in the same datacenter as your Dynatrace Managed server) you don't need a security gateway. However, you will need a security gateway in each external datacenter, and you point that datacenter's agents to it. Here's a diagram which might help:
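To make that routing rule concrete, here is a minimal sketch, assuming hypothetical datacenter names and URLs (this is not a Dynatrace API, just the decision logic from the diagram):

```python
# Routing rule from the diagram: agents in the same datacenter as the
# Dynatrace Managed server connect to it directly; agents in any other
# datacenter go through that datacenter's security gateway.
# All names/URLs below are made up for illustration.

SERVER_DC = "frankfurt"
SERVER_URL = "https://dynatrace-managed.example.internal"
GATEWAYS = {
    "sydney": "https://sgw.sydney.example.internal:9999",
    "singapore": "https://sgw.singapore.example.internal:9999",
}

def endpoint_for_agent(agent_dc: str) -> str:
    """Return the URL an agent in the given datacenter should report to."""
    if agent_dc == SERVER_DC:
        return SERVER_URL        # on-premise: talk to the server directly
    return GATEWAYS[agent_dc]    # external DC: use that DC's local gateway

print(endpoint_for_agent("frankfurt"))  # direct to the Managed server
print(endpoint_for_agent("sydney"))     # via the Sydney security gateway
```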
Perfect! Got it, thanks a lot.
In your diagram, if I build a DR Dynatrace Managed cluster in the Australian DC and fail over to it, will all of the agents in the German DC point to the Australian cluster individually? Based on this, my understanding is that for a DR architecture we had better have a Security Gateway in each DC. Correct?
Hi Charles. You are correct, a gateway will be required to ensure optimal connectivity when a failover happens.
Please note that the documentation says you cannot have a Dynatrace cluster span multiple time zones:
For Dynatrace Managed installations with more than one node, all nodes must:
So I'm not sure how this would actually work out for you if you are facing this situation 😞
@Radu - Thanks for confirming that a DR architecture gets optimal connectivity by using an SG in both DCs. I was thinking of having a global URL, resolved by DNS, that both SGs point to. When a failover happens, we just need to manually change the DNS record, so the SGs will automatically re-point to the active cluster.
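The manual DNS failover idea above can be sketched as follows. The hostnames and IPs are hypothetical, and the `zone` dict stands in for the real DNS server, where the actual change would be made:

```python
# Sketch of DNS-based failover for a DR setup: both SGs point at one
# global hostname; failing over means repointing that record at the
# standby cluster. The dict below is a stand-in for the DNS zone.
zone = {
    "dynatrace.example.internal": "10.1.0.10",  # active cluster (German DC)
}

STANDBY_CLUSTER_IP = "10.2.0.10"  # DR cluster (Australian DC)

def resolve(name: str) -> str:
    """Stand-in for a DNS lookup against the internal zone."""
    return zone[name]

def fail_over(name: str, new_ip: str) -> None:
    """The manual failover step: repoint the global record at the DR cluster."""
    zone[name] = new_ip

print(resolve("dynatrace.example.internal"))   # before: active cluster
fail_over("dynatrace.example.internal", STANDBY_CLUSTER_IP)
print(resolve("dynatrace.example.internal"))   # after: DR cluster
```

In a real setup you would also keep the TTL on that record low so the gateways pick up the change quickly after the record is edited.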
@Gil - A Dynatrace cluster with multiple nodes acts in an HA model, and those nodes are in the same DC in the same time zone, so it should not conflict with what the documentation requires.
I would think moving from Managed to SaaS would simplify things. All we would need are 2 SGs. The only concern is the heavy traffic between the SGs and SaaS.
Actually, as I experimented, it doesn't really work the way the diagram suggests. A OneAgent in one DC may not report to its local SGW in the same DC; instead it may report to the cluster or an SGW in a different DC, and we seem to have no control over this. OneAgent automatically has all of the cluster and SGW endpoints in its configuration file, and it will try to connect to each of them. It would be good if we could tell OneAgent to try the local SGW or cluster first, then the remote ones. I had to shut down all of the SGWs to avoid this confusion among OneAgents across multiple DCs. I hope Dynatrace can improve this.
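The "try local first" behavior being asked for could look something like this. This is only a sketch of the desired endpoint ordering with hypothetical names, not how OneAgent actually selects endpoints today:

```python
# Sketch of the requested "local-first" ordering: sort the endpoint
# list so the agent attempts the SGW/cluster in its own DC before any
# remote ones. The "dc" tags and URLs are hypothetical; OneAgent
# currently offers no such ordering control.
ENDPOINTS = [
    {"url": "https://sgw.de.example.internal:9999", "dc": "germany"},
    {"url": "https://sgw.au.example.internal:9999", "dc": "australia"},
    {"url": "https://cluster.de.example.internal", "dc": "germany"},
]

def connection_order(endpoints, local_dc):
    """Local-DC endpoints first; remote endpoints kept as fallbacks.

    sorted() is stable, so the original order is preserved within
    the local and remote groups.
    """
    return sorted(endpoints, key=lambda e: e["dc"] != local_dc)

for e in connection_order(ENDPOINTS, "australia"):
    print(e["url"])  # the Australian SGW is printed first
```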