Since you can't add attachments to existing topics, I have created this new topic in answer to Chris's question.
I'm currently working on a VCS integration. Today I have executed a fail-over test, and encountered some issues.
(IP ranges are for illustration only; no real addresses.)
Questions / remarks
Run ./dynaTraceServer start; both the front-end server and the back-end server will start up. Start times will vary depending on a few factors: CPU speed, memory, and system load, to name a few.
Run ./dynaTraceServer status and look for the following:
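The start/status steps above can be wrapped in a small polling helper. This is only a sketch: the retry logic is generic, and the assumption that the status command returns a non-zero exit code while the server is still starting is mine, not from the dynaTrace documentation.

```shell
#!/bin/sh
# Generic helper: poll a status command until it succeeds or we give up.
wait_for_up() {
  cmd=$1         # command to poll (e.g. "./dynaTraceServer status")
  max=$2         # maximum number of attempts
  n=0
  until $cmd >/dev/null 2>&1; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1   # give up after $max attempts
    sleep 5                           # start times vary with CPU/memory/load
  done
  return 0
}

# Example usage (illustrative):
#   ./dynaTraceServer start
#   wait_for_up "./dynaTraceServer status" 60 || echo "server did not come up"
```

A helper like this is handy in a VCS online script, where the cluster framework expects the resource to be either up or failed within a bounded time.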
The license files are distinct for each machine/node, so you would not share these across the two nodes. In practical terms, there is only a small set of relevant directories and files that you should share between the two nodes.
For the dynaTrace Collector: is it running as a plugin collector, or is it receiving data from agents? It may be a better design to host the collector on a separate machine; that would limit the single point of failure and provide more flexibility with the collectors. We would need a better understanding of your specific setup to make further recommendations.
Please let me know if this helps or if you have additional questions.
Thanks for answering!
Some answers to your points:
In practice it is very hard to run multiple licenses, because we are using UEM, which needs to keep working after a switch-over as well.
This is a relatively simple dynaTrace setup with a limited number of agents on a very heavy VCS cluster. The external collector process runs deliberately on the cluster node, as it has more than enough processing power and memory, and all components are in the same network segment.
The only issue I cannot solve myself is the licensing story, where I need two server licenses sharing UEM visits...
My current idea is to save the license files for both setups and restore them on the node that becomes active, just before the startup sequence. That way I can keep my setup with completely shared directories (done to keep it simple and to make sure all patches remain active after switching nodes).
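A minimal sketch of that save-and-restore step, assuming per-node copies of dtlicense.key and dtactivation.txt are kept in a non-shared directory; the directory names here are made up for illustration:

```shell
#!/bin/sh
# Copy this node's saved license files into the shared config directory
# just before startup. Paths and the file list are assumptions.
restore_licenses() {
  store=$1   # per-node, non-shared license store
  conf=$2    # shared server configuration directory
  for f in dtlicense.key dtactivation.txt; do
    [ -f "$store/$f" ] && cp "$store/$f" "$conf/$f"
  done
  return 0
}

# Example usage from a fail-over script (illustrative paths):
#   restore_licenses "/opt/dynatrace-local/licenses/$(hostname)" \
#                    /shared/dynatrace/server/conf
```

Because the copy happens before the startup sequence, the shared directory tree itself never has to change shape between nodes.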
If the scripts are not working when you run them manually, I would suggest opening a support case. I'm positive the lab would want to know. We should be able to get the scripts working as needed.
I think I understand the license issue a little better now. Are you saying that you are using the same shared files on both servers, as you state, to keep it simple?
You could do some clever things with symbolic links that would allow you to share the directories, as you are doing now, while keeping the license files in a directory that is not shared. The script that activates the new node can then create a symbolic link to the proper license file for that node. Either way, you need a way for each node to read in its own license files.
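The symbolic-link variant could look like this; the directory names and the file list are assumptions for illustration, not dynaTrace defaults:

```shell
#!/bin/sh
# Point the shared config directory at this node's own license files via
# symlinks. ln -sf replaces any stale link left behind by the other node.
activate_node_license() {
  store=$1   # non-shared, per-node license directory
  conf=$2    # shared configuration directory
  for f in dtlicense.key dtactivation.txt; do
    ln -sf "$store/$f" "$conf/$f"
  done
}
```

Compared with copying, the links only need to be rewritten once per fail-over, and the license files themselves never leave the non-shared directory.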
I do not think there is any way around the license being tied to the machine with the licenses you are currently using. We do now offer a usage-based model; perhaps that may be an option.
I am now preparing a shell script to link the correct license file based on the active node. Is it sufficient to save and restore only dtactivation.txt and dtlicense.key?
I cannot test at the moment, as I haven't received the additional license for the second, inactive, node yet.
I believe so, but you may also want to link the following:
Also, you may need to make sure the cmdb.config.xml file is unique for each node. This file maintains the server configuration with hostnames.
You could test having this as a shared file, but I have only tested with it being unique.
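A quick sanity check before starting the server could verify that every node-specific file is in place. The file list (dtlicense.key, dtactivation.txt, cmdb.config.xml) comes from this thread; adjust it to your environment.

```shell
#!/bin/sh
# Report any node-specific file that is missing from the config directory.
node_config_check() {
  conf=$1
  for f in dtlicense.key dtactivation.txt cmdb.config.xml; do
    [ -e "$conf/$f" ] || echo "missing: $f"
  done
  return 0
}
```

Running this from the fail-over script and aborting on any output would catch a half-restored node before the server starts with the wrong identity.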