20 Nov 2019 11:24 PM - last edited on 09 Jun 2021 04:28 AM by MaciejNeumann
Backing up a Dynatrace Managed cluster with more than one node requires a shared directory mounted at the same path on all cluster nodes (NFS is recommended). Is the backup process resilient to outages of the mounted remote filesystem?
For example, if the NFS server is unresponsive, can this affect normal Dynatrace cluster node operation? (Of course, backups cannot run during the outage.)
Are there any options for temporarily disabling the backup (for maintenance on the NFS server) other than turning the backup on/off in the CMC?
A backup is triggered every hour for an Elasticsearch snapshot of the database. If NFS is unavailable, this operation will time out and will not affect normal operation of the server or any other components.
A backup is also triggered every 24 hours for a Cassandra dump. This is implemented as a separate script launched on each node. Again, if NFS is unavailable, the script will simply fail to execute, or will run for a long time until it times out.
There's no option to pause the backup. Frankly, it isn't really needed: in the worst case you will miss one daily Cassandra dump, but the previous backup should still be available.
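Since an unresponsive NFS mount only makes the hourly Elasticsearch snapshot and the daily Cassandra dump time out, a quick way to see what to expect before an NFS maintenance window is to probe the mount yourself. A minimal sketch; the mount path and the 5-second bound are assumptions, not Dynatrace defaults:

```shell
#!/bin/sh
# Probe the shared backup mount before NFS maintenance.
# /mnt/backup and the 5-second timeout are example values.
BACKUP_MOUNT="${BACKUP_MOUNT:-/mnt/backup}"

# 'stat' on a dead NFS mount can hang indefinitely, so bound it with 'timeout'.
if timeout 5 stat "$BACKUP_MOUNT" >/dev/null 2>&1; then
    echo "backup mount responsive: snapshots should succeed"
else
    echo "backup mount unavailable: the next snapshot/dump will time out"
fi
```

If the probe fails, the cluster itself keeps running normally; only the pending Elasticsearch snapshot and Cassandra dump are skipped, as described above.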
Hello @Július L. and @Radoslaw S.
I am a little confused regarding the storage path.
Do we need to provide the physical storage path (e.g. 10.10.10.10:/dynatrace-ns/dynatrace-bkt) or the mount point (e.g. /cluster_configuration_backup) created on all cluster nodes?
It should be the mount point that was created across all cluster nodes, for example /usr/local/dynatrace.
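To tie the two paths together: the NFS export is what you mount, and the mount point is what you enter in the CMC. A hedged sketch reusing the example values from this thread (the server address, export, and mount point are illustrative only):

```shell
# Run on every cluster node (as root). Values are the examples from this thread.
mkdir -p /cluster_configuration_backup
mount -t nfs 10.10.10.10:/dynatrace-ns/dynatrace-bkt /cluster_configuration_backup

# To survive reboots, add a matching /etc/fstab entry, e.g.:
# 10.10.10.10:/dynatrace-ns/dynatrace-bkt  /cluster_configuration_backup  nfs  defaults  0 0
```

In the CMC you would then enter /cluster_configuration_backup, not the 10.10.10.10:/dynatrace-ns/dynatrace-bkt export path.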
@Malik R. is correct. We did the same thing. Dynatrace might still list the path as incorrect, but the backup will perform as expected.
Why would Dynatrace list this as incorrect?
Hey @Radoslaw S., this was back at my previous employer, where we ran a Managed instance. For some strange reason Dynatrace would always show a yellow warning under the mount point location text. I don't quite recall what it stated, but the CMC didn't really like the mount point; nevertheless, it backed up every night right on time. I can reach back out to my contacts over there to see if the issue is still present, and maybe get you a screen capture of it.
Correct. It just needs to be the same mount point across all nodes.
@Babar Q. That's correct. Dynatrace might say that it doesn't like it, but it should still work. Or at least that's what it did back when we set it up as Managed two years ago.
Yes, that is exactly what you will need to do. As @Július L. mentioned, it needs to be the same mount point across all nodes. https://www.dynatrace.com/support/help/setup-and-configuration/dynatrace-managed/operation/back-up-a... In the example, Dynatrace used: "On each target node, mount the backup storage to, for example, /mnt/backup. This path is referred to as …"
Hello @Július L., @Chad T. and @Malik R.,
Thank you for the confirmation and assistance.
Hello @Radoslaw S. @Július L. @Chad T. @Malik R.
According to the documentation, "The user running Dynatrace services needs read/write permissions for the shared file system." If I am not wrong, by default the services run as the user dynatrace. Is my understanding correct?
The user dynatrace has full permissions on the shared file system, but I am getting the issue below.
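One way to narrow down a permission error like this is to test a write from each node. A minimal sketch; the mount path is an assumption, and the dynatrace user name follows the documentation quoted above:

```shell
#!/bin/sh
# Check that the (assumed) backup mount is writable from this node.
BACKUP_MOUNT="${BACKUP_MOUNT:-/mnt/backup}"

if touch "$BACKUP_MOUNT/.dt_write_test" 2>/dev/null; then
    rm -f "$BACKUP_MOUNT/.dt_write_test"
    echo "write OK"
else
    echo "write denied or mount unavailable"
fi
```

Run it as the service user (e.g. `sudo -u dynatrace sh check_backup_write.sh`) on every node; if only some nodes fail, the export options or UID mapping on those nodes are the likely culprits.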
Is that set on all nodes?
Please paste the output of `ls -la` so I can see what the permissions look like.
Hello @Radoslaw S.
Please find the below information.
I am getting "permission denied" even as the root user when running the command `ls -la`.
@Babar Q. are you using CIFS by any chance?
Such problems may be caused by caching in CIFS; that is why we recommend using NFS rather than CIFS. The issue happens less often when the "persistenthandles" and "actimeo=0" parameters are used when the CIFS share is mounted.
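For reference, the two parameters mentioned above go into the CIFS mount options. A hedged example (the server, share, mount point, and credentials file are placeholders; persistent handles also require an SMB3 dialect on both ends):

```shell
# Example CIFS mount with attribute caching disabled (actimeo=0) and
# persistent handles enabled. //fileserver/backup, /mnt/backup, and the
# credentials file are placeholders for your environment.
mount -t cifs //fileserver/backup /mnt/backup \
    -o credentials=/etc/cifs-credentials,vers=3.0,persistenthandles,actimeo=0
```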
Nothing else comes to my mind. Please open a support ticket, we'll try to troubleshoot this individually.
In the documentation the following appears:
In this case, we have to disable backups manually. How to do this is explained at the bottom of the page: