A customer raised an issue I'm not sure about. They are concerned about their backup policy: they don't back up all cluster nodes at once but with some delay between them, so they are worried about possible restore issues if a restore is ever needed.
Question: I assume Dynatrace is clever enough to synchronize from the latest backup copy and extend it to the rest of the cluster nodes. Correct me if I'm wrong.
Thanks in advance.
I assume they don't use Dynatrace backups and do that on their own? That's not the best approach here, as it may lead to incompatibilities and restore failures. I recommend using the in-product backup feature to have a guarantee of success.
Here's how Dynatrace backup works: metric storage is dumped at a given time, and Elasticsearch storage creates snapshots from all nodes every hour. Each node must be connected to the NFS share (the NFS disk should be mounted at the same shared directory on each node), and the Dynatrace server process needs read/write permissions on it. The protocol used to transmit data depends on your configuration; we recommend NFSv4 and advise against CIFS.
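As a rough sketch, mounting a shared NFSv4 export at the same path on every node could look like this. The server name, export path, mount point, and service user below are placeholders for illustration, not Dynatrace defaults:

```shell
# Hypothetical /etc/fstab entry, identical on every cluster node:
# backup-nfs:/exports/dynatrace-backup  /mnt/backup  nfs4  rw,hard  0  0

# Or mount manually:
sudo mount -t nfs4 backup-nfs:/exports/dynatrace-backup /mnt/backup

# Verify the user running the Dynatrace server process can write there:
sudo -u dynatrace touch /mnt/backup/.write-test && echo "write OK"
```

The key point from the post above is that the mount point must be the same directory on each node, and the write check should succeed as the Dynatrace service user, not just as root.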
I have a follow-up question on this, if you don't mind. We are in a similar situation, with a customer's own backup process (full + incremental) already in place. Our main interest in a recovery scenario would be to get the Dynatrace configuration back so that monitoring could be enabled again as soon as possible. History data, metrics etc. are not important. With that in mind, do you have any further info on this "It might not work due to data inconsistency" - does it mean that the recovery would fail as a whole, or that we'd simply be missing some monitoring data due to the clusters not being synched?
Kalle, configuration-only backups have been available since version 182. I strongly recommend switching to in-product backups.
Ok, will do 🙂 Is there any size estimate for the config-only backups? The documentation only talks about Cassandra and Elasticsearch. I know it likely won't be much, but it would be nice to have some idea...
We ran short tests: our Cassandra config backup took about 500 GiB, whereas with metric data it was about 1 TiB. But plenty of space was also taken by Elasticsearch snapshots, which you can delete manually when you switch to config-only.
It really depends on your environment.
Moreover you can disable user sessions backup to save more space.
Hmm, so if I understood this correctly: the Cassandra backup would normally contain both the metrics data and the config data, but if I select the config-only option, the metrics part is omitted. In your example, the metrics portion was about 50% of the total Cassandra backup?
So if the document states:
The estimate of the required backup size is based on the metrics storage size. Typically, it's 20% of the sum of the metrics storage on all the nodes (the number of nodes doesn't affect the formula).
-> For config only, I'd use 10 % instead of 20 %?
The reason is that configuration doesn’t grow as much as metrics.
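As a quick sanity check, the estimate can be computed like this. The 20% figure is from the documentation quoted above; the 10% config-only figure is the assumption proposed in this thread, not an official number, and the metrics total is just an example value:

```shell
# Sum of metrics storage across all nodes, in GiB (example value)
metrics_gib=1000

# Documented guidance: full backup estimate is ~20% of total metrics storage
full_est=$(( metrics_gib * 20 / 100 ))

# Assumption from this thread: ~10% for config-only backups
conf_est=$(( metrics_gib * 10 / 100 ))

echo "full=${full_est}GiB config-only=${conf_est}GiB"
```

With 1000 GiB of metrics across all nodes, that would give roughly 200 GiB for a full backup and 100 GiB for config-only; as noted above, the real number depends heavily on your environment.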
Ok, thanks again for the update. I can report my own findings to this discussion once I've managed to get that NFS mount in place.
Thanks, but the customer already has a scheduled backup of the whole vCenter, and following the Dynatrace backup approach would mean duplicating effort and disk space for them.
So, regarding my question: what happens after a full restore of all Dynatrace nodes when their Cassandra and Elasticsearch data are not exactly the same (up to 2-3 hours of difference)? Is Dynatrace clever enough to align all nodes to the latest versions?
It might not work due to data inconsistency. This is not a supported backup/restore scenario.
If you are more interested in backing up all of your Dynatrace configuration, I suggest using our tool Composer. We developed Composer to simplify Dynatrace admin life and uploaded it to the Hub: https://www.dynatrace.com/hub/?query=composer
Please check it out. Within minutes you can set your Dynatrace configuration under version control.