Summary: This article provides a straightforward, step-by-step guide to setting up a new Premium High Availability (PHA) cluster. It is intended to supplement the official public documentation and to simplify the installation process.
Before setting up a Premium High Availability (PHA) cluster, it is essential to ensure that several prerequisites are met. The following checklist outlines the necessary steps and considerations to prepare your environment for a successful deployment:
Confirm that your Dynatrace license includes Premium HA (PHA) capabilities. If PHA is not enabled, contact your Customer Success Manager (CSM) or Customer Solutions Engineer (CSE) to arrange for this feature to be activated. Refer to the documentation for more information.
Ensure that all hardware and system requirements are fulfilled before installation. This includes verifying that the servers and infrastructure meet or exceed the minimum specifications outlined in the Managed hardware requirements documentation.
It is critical to configure cluster node ports for bi-directional connectivity. Without proper bi-directional communication, cluster nodes won't be able to interact as required, leading to failures during installation and operation. Refer to the Managed Network Ports documentation for detailed configuration steps.
Verify that the cluster can establish a connection to Dynatrace Mission Control. This connection is necessary to validate the license, monitor cluster health, and perform other essential functions. Note that Premium HA clusters require an online connection to Mission Control and are available only for Managed online clusters. See documentation.
Check whether SELinux is enabled or disabled on all nodes. If SELinux is enabled, it must be set to ENFORCING mode for managed services to function correctly. Consult the Dynatrace Managed SELinux documentation for guidance on configuring SELinux.
A Premium HA cluster requires a minimum of six nodes, with three nodes located in each data center. The maximum supported configuration is thirty nodes, with up to fifteen nodes per data center. Ensure that your planned deployment meets these requirements to guarantee proper operation and high availability.
Begin the installation process by deploying the cluster nodes in Datacenter 1 (DC1). This example uses three nodes for DC1. Follow the standard installation procedure as outlined for your specific environment.
Once all three cluster nodes are installed in DC1, proceed with setting up Premium High Availability (HA) as per the documentation. During this stage, ensure that backup remains disabled, especially if it was previously enabled, until the entire setup process is fully completed.
It is essential to confirm that nodes across both DC1 and DC2 can communicate with each other. To verify connectivity, execute the following command from a host in DC2, replacing <DC-1-node-IP> with the appropriate IP address:
curl -k https://<DC-1-node-IP>/rest/health
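If connectivity is in place, the call should succeed with an HTTP 200 response. As a quick check, curl's standard -w option can print just the status code; this variant is a convenience sketch, not part of the original article:
curl -k -s -o /dev/null -w "%{http_code}\n" https://<DC-1-node-IP>/rest/health
A result of 200 confirms that the DC2 host can reach the DC1 node.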
Set the required environment variables on every node in both datacenters. Below is a template for the variables you need to configure:
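The original template is not reproduced here; the sketch below covers the variables used by the commands later in this guide, with placeholder values to adapt to your environment. Note that $NODES_IPS must be a JSON array of the DC2 node IPs, because it is embedded in a request body in a later step:
export SEED_IP=<DC1-seed-node-IP>
export API_TOKEN=<cluster-API-token>
export DC2_NAME=<name-for-datacenter-2>
export NODES_IPS='["<DC2-node-1-IP>", "<DC2-node-2-IP>", "<DC2-node-3-IP>"]'
export DT_DIR=<Dynatrace-installation-directory>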
Check whether a custom.settings file exists for Cassandra and Elasticsearch. For new installations, this file may not be present. If it does exist, contact Dynatrace support for guidance on required changes.
On the Datacenter 1 nodes, execute the following command to reconfigure Elasticsearch for Premium HA. You can monitor its progress using tail -f nohup.out.
sudo nohup $DT_DIR/installer/reconfigure.sh --only els --premium-ha on &
Copy the installer setup to all DC2 nodes. This can be done by navigating to the CMC, selecting 'Deployment status', then 'Add cluster node', and using the provided wget command to download the installer on each DC2 node.
Before proceeding, set the datacenter name for DC2. Run the following command from the DC1 seed node, ensuring the $DC2_NAME and $NODES_IPS variables are set as described in the environment-variables step above:
curl -ikS -X POST -d "{\"newDatacenterName\": \"$DC2_NAME\", \"nodesIp\": $NODES_IPS}" https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/datacenterTopology?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
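With example values filled in, the request body resolves to JSON like the following (the datacenter name and IP addresses are placeholders):
{
  "newDatacenterName": "DC2",
  "nodesIp": ["10.20.0.1", "10.20.0.2", "10.20.0.3"]
}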
The command output will confirm the operation's success.
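Because the command runs curl with -i, the response headers are printed; a successful call starts with a status line like the following (the exact HTTP version may differ):
HTTP/1.1 200 OK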
Open the required firewall rules for DC2 nodes by executing the following command from the DC1 seed node:
curl -ikS -X POST -d "$NODES_IPS" https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/clusterNodes/currentDc?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Take note of the REQUEST ID generated by the above command, as it will be required in subsequent steps. Set this as an environment variable on the SEED node:
export REQ_ID=<Generated REQUEST ID>
Verify that the firewall rule has been applied; the status should be 200.
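The verification command is not included in the article at this point. Mirroring the other status checks in this guide, a GET on the same endpoint with the request ID appended should return the status; treat the exact path as an assumption and confirm it against the Dynatrace Multi-DC migration API documentation:
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/clusterNodes/currentDc/$REQ_ID?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"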
Install the DC2 nodes one at a time; don't perform installations in parallel. Wait a few minutes after each node installation before proceeding to the next. Use the following command on each DC2 node:
sudo /bin/sh ./managed-installer.sh --install-new-dc --premium-ha on --datacenter $DC2_NAME --seed-auth $API_TOKEN
After installing all DC2 nodes, check the nodekeeper status for DC2 by executing the following command from the Seed Node in DC1:
curl -ikS https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/nodekeeper/healthCheck?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Initiate Cassandra replication from DC1 to DC2 by running the following command from the DC1 seed node. The response should include a status of 200 and will generate a Request ID, which should be set as an environment variable on the SEED node.
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/currentDc?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Check the replication status from the DC1 seed node; the status should show as 200.
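The status-check command is not shown in the article here. Mirroring the DC2 check later in this guide (cassandra/newDc/$REQ_ID), the equivalent call for the current datacenter should look like the following; confirm the exact path against the Dynatrace documentation:
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/currentDc/$REQ_ID?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"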
To initiate replication with DC2, use the command below from the SEED node. Set the new Request ID as an environment variable and check the replication status for DC2, ensuring a 200 response.
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/newDc?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Check the replication status for DC2 from the seed node; the status should be 200.
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/newDc/$REQ_ID?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Rebuild Cassandra on the DC2 nodes by executing the following command from the SEED node. This rebuild might take a few minutes to hours, depending on the database size.
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/rebuild?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Monitor the rebuild process using the command below. The rebuild is complete when the response contains "error":false.
If "error":true appears, consult Dynatrace support.
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/rebuild?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
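Because the rebuild can run for hours, a simple polling loop saves manual re-checking. The sketch below polls every five minutes until the response reports "error":false; the grep pattern tolerates whitespace around the colon and assumes the field name quoted above:
until curl -ksS "https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/cassandra/rebuild?Api-Token=$API_TOKEN" -H "accept: application/json" | grep -Eq '"error"[[:space:]]*:[[:space:]]*false'; do
  sleep 300  # wait five minutes between checks
done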
Start Elasticsearch replication on the DC2 nodes by running the following command from the SEED node. This will generate a REQUEST ID to be set as an environment variable.
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/elasticsearch?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
To monitor progress, run the command below. The replication should return a 200 status.
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/elasticsearch/$REQ_ID?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
To further verify data replication, execute the command below. It should return a 200 status and a message such as "Elasticsearch Migration is finished...".
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/elasticsearch/indexMigrationStatus?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Begin the server migration by starting OneAgent and NGINX on the DC2 nodes. Run the following command from the SEED node to start the migration and generate a REQUEST ID, which must be set as an environment variable to verify cluster readiness:
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/server?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
To check cluster readiness, execute the command below. This may take some time to return a 200 status, as the services are still starting on the DC2 nodes.
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/server/$REQ_ID?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Finally, migrate nodekeeper by executing the following command from the DC1 Seed node.
curl -ikS -X POST https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/nodekeeper/currentDc?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Check the migration status using the command below. Allow some time for the status to update to 200, confirming successful migration.
curl -ikS -X GET https://$SEED_IP/api/v1.0/onpremise/multiDc/migration/nodekeeper/currentDc/status?Api-Token=$API_TOKEN -H "accept: application/json" -H "Content-Type: application/json"
Note: This entire procedure requires a stable network connection between datacenters. If the required ports are not open, the setup can fail midway, and you may need to start over or rebuild the datacenter.
If this article did not help, please open a support ticket, mention that this article was used, and provide the following: