I'm working with a Dynatrace Managed customer who cannot use the provided dynatrace-managed.com URL due to security policies (a public external URL resolving to internal IPs), so we'll need to create an internal URL pointing to the Dynatrace cluster (5 nodes). I know we'll need a URL and a valid certificate to serve HTTPS, but my question is: do I need to set up a VIP address pointing to the cluster nodes?
If yes: do I need to configure it with or without session persistence, and for which ports — just 443, or 8443 and 9999 as well?
If no: is there any documentation describing how the Dynatrace server handles failover and load balancing without a VIP address, using simple DNS resolution alone?
It's actually simple: the cluster does the load balancing itself using the NGINX web server (if you are on v136 or newer), so you just need to set up DNS to point to all 5 hosts of the cluster. This, however, will not protect you from failed requests during upgrades (if DNS happens to resolve to the node being upgraded) or from the failure of a node. Upgrades are quite fast, so there is little chance of an inaccessible UI.
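As a sketch, the DNS setup described above is just multiple A records under one name (round-robin). The zone name and IPs below are placeholders, not anything from a real Dynatrace install:

```
; Hypothetical BIND-style zone fragment: one name, five A records,
; one per cluster node. Clients get the full set and pick an address.
dynatrace.internal.example.com.  300  IN  A  10.0.1.11
dynatrace.internal.example.com.  300  IN  A  10.0.1.12
dynatrace.internal.example.com.  300  IN  A  10.0.1.13
dynatrace.internal.example.com.  300  IN  A  10.0.1.14
dynatrace.internal.example.com.  300  IN  A  10.0.1.15
```

A short TTL (300 s here) limits how long clients keep resolving to a node that has gone down.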
If setting up a VIP and load balancing it with an external load balancer such as F5 is feasible for you, I'd recommend doing so. If that's a big deal, just use DNS load balancing and live with its shortcomings.
In this case it's not a big deal to use a load balancer (A10 Networks devices are available). Do you know whether I need to set the VIP for just port 443, or do I need to include 8443 and 9999 too?
Since the load balancing is done by NGINX, do you think it will be a problem to configure the VIP to provide persistent sessions?
You need just 443 for the UI. Agents connect directly to 8443 and are load balanced automatically; they have the addresses of all nodes and gateways in their config.
Session stickiness is also handled at the NGINX level, so you shouldn't have to worry about it at the physical balancer.
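To sanity-check the setup above (VIP answering on 443, nodes answering directly on 8443), a quick TCP reachability probe is enough. This is a generic sketch; the hostnames in the example are placeholders for your environment, not real Dynatrace names:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder names: substitute your VIP and a cluster node.
    for host, port in [("vip.dynatrace.internal", 443),
                       ("node1.dynatrace.internal", 8443)]:
        print(host, port, "open" if tcp_port_open(host, port) else "closed")
```

This only checks TCP reachability, not certificate validity; for that you'd still open the URL in a browser or use a TLS-aware client.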
Just wanted to follow up on what Julius said about "They have addresses of all nodes and gateways in the config." Because of this automatic load balancing, I found that a OneAgent reported to the gateway in another data center, since the agent doesn't know that the gateway is in a different data center. Is there any option that lets us tell OneAgent to report directly to a Dynatrace Managed node instead of a GW in the same DC, or at least not to report to a GW in a different DC?
AFAIK there is currently no control over preference at that level. Agents simply prioritize an environment ActiveGate over a cluster ActiveGate, and a cluster ActiveGate over a cluster node.
But there is no control over which particular gateway or cluster node is preferred. You have to use firewall rules or other means to prevent agents from connecting to gateways they should not connect to.
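The priority order described above can be illustrated with a small sketch. Note this is only a model of the stated behavior, not the real OneAgent configuration: the endpoint records, field names, and addresses below are all hypothetical.

```python
# Hypothetical model of the connection preference described in the thread:
# environment ActiveGate > cluster ActiveGate > cluster node.
PRIORITY = {"environment_activegate": 0,
            "cluster_activegate": 1,
            "cluster_node": 2}

def order_endpoints(endpoints):
    """Sort endpoints by priority class; within a class, order is unchanged."""
    return sorted(endpoints, key=lambda e: PRIORITY[e["type"]])

endpoints = [
    {"addr": "node1.dc1.example:8443",   "type": "cluster_node"},
    {"addr": "ag1.dc2.example:9999",     "type": "cluster_activegate"},
    {"addr": "env-ag.dc1.example:9999",  "type": "environment_activegate"},
]

# The sort key has no data-center awareness: an ActiveGate in another DC
# still outranks a local cluster node, matching the behavior observed above.
print([e["addr"] for e in order_endpoints(endpoints)])
```

The absence of any DC-related term in the sort key is exactly the gap the follow-up post complains about.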
Understood. I could not find a better way than shutting down the gateway in the other DC to force that agent to report to the cluster or a GW in the same DC. This is not a perfect design for a multi-DC environment. Using firewalls is not ideal either, because we need agents to be able to fail over to cluster nodes or GWs in another DC. The dev team may want to consider this as an improvement opportunity if they expect companies to deploy Dynatrace widely as a standard APM tool.