I have a 25 host unit license in my Dynatrace Managed environment that is fully used by all DC servers, but my customer needs to monitor some DR servers too during DRP tests.
My plan is to install OneAgent on some of the DR servers, and then, when the application needs to switch from DC to DRC, I will first disable monitoring of the DC servers from the environment UI, so the license gets allocated automatically to the DR servers. Is that right?
Is my plan possible? Can I install OneAgent on some servers when my license is fully used?
You can install more OneAgents without any issues; they just won't be able to connect to the cluster.
Just be careful: there is no option to "stick" licenses to important hosts (DC in your case). If you shut down the DC servers, the DRC servers should pick up the license. However, unless the DC servers release their host units, the DR servers won't be able to connect, so you might end up with a mixture of DC and DRC servers monitored at the same time.
So the safest way at the moment is to disable and enable those hosts, either in the UI or via the API.
Also, depending on your license, you may have overages enabled for host units. In that case, you could temporarily increase your monitoring consumption above the host units you have.
Regarding your comment that "there is no option to 'stick' licenses to important hosts": since it sounds like you know a bit about how this works, if I try to enable monitoring for new hosts when I have already hit the HU limit and don't have overages allowed, does it simply not monitor the new hosts? Existing monitoring will still work as before, i.e. the new hosts won't "steal" host units from the old ones?
For the DR scenario, I would imagine one challenge is how to handle deep monitoring. Let's say the DR environment kicks in -> you manually disable monitoring of the DC hosts in Dynatrace and enable monitoring on DR -> now you have a long list of processes pending a restart, right?
This is based on my experience at various customers. As far as I can remember it worked this way for me, but I can't guarantee it.
It's best to turn off hosts that should be turned off anyway. Luckily, you can now leverage the API to turn any host on or off. With 25 HU and just one DR test, it might not make sense to use the API, but for larger environments or regular switching it's a pleasure.
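Turning host monitoring on or off via the API could be sketched like this. This is a minimal sketch assuming the Settings 2.0 endpoint (`/api/v2/settings/objects`) with the `builtin:host.monitoring` schema and a token holding the `settings.write` scope; the base URL, token, and host ID below are placeholders, so verify the endpoint and schema against your cluster's API documentation before relying on it.

```python
import json
import urllib.request

def build_host_monitoring_request(base_url, api_token, host_id, enabled):
    """Build (but do not send) a Settings 2.0 request toggling host monitoring.

    Assumes the builtin:host.monitoring schema and a token with the
    settings.write scope -- check your cluster's API docs, as these
    names are assumptions, not confirmed by this thread.
    """
    payload = [{
        "schemaId": "builtin:host.monitoring",
        "scope": host_id,                # e.g. "HOST-ABC123DEF456"
        "value": {"enabled": enabled},   # False = stop consuming host units
    }]
    return urllib.request.Request(
        url=f"{base_url}/api/v2/settings/objects",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Api-Token {api_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )

# Placeholder environment URL, token, and host ID; send with
# urllib.request.urlopen(req) once the values are real.
req = build_host_monitoring_request(
    "https://dynatrace.example.com/e/ENVIRONMENT-ID",
    "dt0c01.SAMPLE-TOKEN",
    "HOST-ABC123",
    False,
)
```

The same call with `enabled=True` re-enables monitoring, so one small function covers both sides of the DR flip.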
Regarding the statement "just they won't be able to connect to the cluster": does this mean the new host is listed in the deployment status, but no data comes from it?
Exactly. It will be disabled due to the quota limit a minute or two after it connects, with no change in the host's settings (the UI will still show it as enabled).
You can see a message on the host screen that it is unmonitored due to the license limit, and probably also a notification (I'm not sure about that).
So in my scenario, I want to install OneAgent on the DR hosts before the DR test starts, and when the DR test starts, disable some DC hosts so that the DR hosts take over the license and automatically switch to monitored status. Is that right?
Also, regarding license calculation: does Dynatrace take some time to recalculate the license when I disable some hosts?
Theoretically yes, but I'd recommend installing OneAgents on the DR site first and immediately disabling them.
After the DR procedure is flipped over, disable the production hosts and enable the DR hosts. Most likely your license won't allow you, even temporarily, to consume more HU than you have, so you will have to disable hosts before enabling others.
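The ordering constraint above (disable first, then enable, so the HU cap is never exceeded) can be captured in a tiny planning helper. This is a hypothetical sketch, not a Dynatrace API: `plan_failover` and the host IDs are made-up names for illustration, and each `(host_id, enabled)` pair would be executed as one disable/enable call against your cluster.

```python
def plan_failover(dc_hosts, dr_hosts):
    """Return the ordered monitoring toggles for a DC -> DR flip.

    Hypothetical helper for illustration: DC hosts are disabled first so
    their host units are freed before any DR host is enabled, which keeps
    consumption under the license cap at every step.
    """
    ops = [(host, False) for host in dc_hosts]   # free the host units first
    ops += [(host, True) for host in dr_hosts]   # then let DR hosts consume them
    return ops

# Placeholder host IDs; in practice these come from your environment.
ops = plan_failover(["HOST-DC1", "HOST-DC2"], ["HOST-DR1"])
# Every disable precedes every enable in the returned plan.
```

Switching back to DC is the same call with the arguments swapped: `plan_failover(dr_hosts, dc_hosts)`.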
I have one more question about this DR scenario. My customer gave some info about it: when the DR scenario starts, the IP address of the DC server is given to the DRC host, so in this case the OneAgent on the DC server will not connect to the Dynatrace cluster. So when the DR scenario runs for around one week and then switches back to DC, can the OneAgent on the DC server automatically connect to the Dynatrace cluster again?
@Muhammad N. I think I don't fully understand your DR scenario, but your hosts can be disabled/enabled as long as they have connected at least once to the cluster.
The DR scenario is like this: while the system is running in the DC environment, the server has a production IP, e.g. 10.100.100.10, and the standby DRC server has a non-production IP, e.g. 126.96.36.199. When the system starts running in the DRC environment, the production IP 10.100.100.10 is assigned to the DRC server, and the DC server uses a non-production IP, e.g. 188.8.131.52, for which there is no firewall rule allowing a connection to the Dynatrace cluster on port 443.
So under these conditions, the DC server may not be able to connect to the DT cluster while the system is running in the DRC environment (for about a week), even when the DC server is up, right?
Back to my question: under these conditions, will the OneAgent in the DC environment connect again when the DRP scenario switches back to DC? Or does OneAgent have a timeout when trying to connect to the Dynatrace cluster?
It depends on your network routing. Normally in such DR scenarios, only a floating IP (VIP) is reassigned. I guess the DR server has an IP assigned all the time, and when applications are switched to the DR server, it simply has multiple IP addresses.
The best way is to test whether you can reach your cluster or an ActiveGate from the DR server. If it's reachable, you can install OneAgent right away and immediately disable the host in the UI after it connects.
Agents regularly poll all available Dynatrace server and ActiveGate endpoints, so if a OneAgent loses its connection for whatever reason, it tries to reconnect in a loop.
The applications are running in DR now; I have a OneAgent on the DR server and I can monitor it. But as I said before, the DC servers that have OneAgent installed can't connect to the DT cluster, because the non-production IP can't reach it.
So, based on your statement "Agents do have regular polling of all available Dynatrace server and ActiveGate endpoints. So if it loses connection for whatever reason, OneAgent tries to reconnect in a loop": does OneAgent always keep trying to reconnect? And when the applications are switched back to DC (with the DC server on its production IP again), will it connect automatically, with no need to restart the OneAgent service?