13 Jul 2020 01:53 PM - last edited on 25 May 2023 09:46 AM by Karolina_Linda
A multi-node Managed Cluster is deployed at a banking customer site, with some nodes in the Production data centre and some in the DR data centre.
This customer also uses the same IPs and the same hostnames for the banking servers in both the Production and DR data centres.
Currently, the customer performs a DR simulation by cutting off the connection between the Production and DR data centres, then spinning up all server instances in the DR data centre and verifying that the servers and services there are running fine.
Thus, unlike a real DR event where only Prod or only DR captures data, in this DR simulation both sides keep capturing data.
So before the cut-off, data is captured and analyzed fine. During the cut-off, the Prod nodes of the Managed Cluster capture data on their own, and the DR nodes do the same. The question comes once the cut-off ends: "When the DR simulation finishes, the Prod nodes and DR nodes of the Managed Cluster can talk to each other again. They will realize that both sides captured different data for the same period of time, so what will the data look like for that period?"
1. Would it be empty, with no data?
2. If not empty, would it use the data captured by the Prod nodes or the DR nodes?
3. If neither of the above, how does Dynatrace Managed decide which data to use?
Best Regards,
Wai Keat
13 Jul 2020 02:04 PM
In case of node unavailability, Cassandra stores the missing data "on the side" using so-called "hinted handoffs" (https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsRepairNodesHintedHandoff...)
It can store up to 3 hours of unsynchronized data; after that, the data will be overwritten and lost.
Once the nodes are up again, the data from the hinted handoffs is replayed to the previously missing nodes.
If the cut-off lasts longer than 3 hours, Cassandra additionally requires a repair to be run to make the data consistent again.
If the outage lasts longer than 3 days, the nodes have to be rebuilt, as Cassandra will remove the unavailable nodes.
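For anyone curious about the underlying Cassandra mechanics: the 3-hour window corresponds to Cassandra's max_hint_window_in_ms setting (default 10800000 ms), and the repair/rebuild steps map to standard nodetool subcommands. A rough sketch, assuming direct nodetool access on a cluster node (on Dynatrace Managed you would go through the bundled /utils/cassandra-nodetool.sh wrapper instead, and <source_dc_name> below is only a placeholder):
# cassandra.yaml - default hint window, 10800000 ms = 3 hours
# max_hint_window_in_ms: 10800000
# check whether hinted handoff is enabled on a node
nodetool statushandoff
# after an outage longer than the hint window: make replicas consistent again
nodetool repair -full
# after an outage long enough that a node was removed: rebuild it from another DC
nodetool rebuild -- <source_dc_name>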
See also failover mechanism: https://www.dynatrace.com/support/help/setup-and-configuration/dynatrace-managed/basic-concepts/dyna...
What's also risky here is that I assume they have equally balanced Prod and DR DCs, for example 3 nodes in Prod and 3 nodes in DR. If that's the case, they no longer have a majority of nodes up once the cut-off is done, and data cannot be stored or read properly. To address that scenario, look at the Premium HA offering here: https://www.dynatrace.com/support/help/setup-and-configuration/dynatrace-managed/basic-concepts/dyna...
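As a quick way to verify that balance in practice (again assuming nodetool access, e.g. via the Managed wrapper script), Cassandra's status output lists every node grouped by data centre:
# UN = Up/Normal, DN = Down/Normal; nodes are grouped per datacenter
nodetool status
If it shows 3 nodes per DC, cutting one DC off leaves only 3 of 6 nodes reachable, which is not a majority.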
14 Jul 2020 08:10 PM
Hi @Radoslaw S.
First few facts for our case:
Now the questions are:
Thanks in advance
Yos
15 Jul 2020 11:41 AM
1. That will be hard to estimate. I'd check how much you utilize between nodes within a DC - it should be similar.
2. 15 ms should be fine. The issues that arise with longer latencies are data consistency and performance.
3. Yes, that's embedded in Cassandra.
4. Usually a Cassandra DB repair (nodetool repair) is needed to synchronize data between nodes. Visible issues might be gaps in data. We will work on running that automatically in the near future.
5. A Cassandra DB rebuild might be needed (nodetool rebuild) to synchronize the data from scratch on the node that was lost.
Moreover, you need to be aware that if you cut off 3 nodes out of 6, you can lose some data, since data is stored with a replication factor of 3 (there's a chance you cut off the nodes that contain all replicas of a given piece of data). What could help here is a rack-aware setup, so that Cassandra and Elasticsearch know each node's location and will not store all copies of the data in a single location (see the sketch below). That will be supported in the near future.
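To illustrate what rack-awareness means on the Cassandra side (this is not something you can configure yourself in Dynatrace Managed today, as noted above - it's only a sketch of the plain-Cassandra mechanism, and the dc/rack names are placeholders), each node declares its location in cassandra-rackdc.properties and the snitch then spreads replicas across those locations:
# cassandra-rackdc.properties on a node in the Production data centre
dc=dc1
rack=prod
# cassandra-rackdc.properties on a node in the DR data centre
dc=dc1
rack=dr
With that in place, the replica placement strategy avoids putting all copies of a given row in a single location.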
15 Jul 2020 12:24 PM
Thanks @Radoslaw S. for the detailed answer.
We will wait for the update on rack-aware support.
All the best and stay safe.
Yos
15 Jul 2020 12:41 PM
A rack configuration will be possible if you can set up 3 racks. Otherwise, with 2 racks you are vulnerable to split brain (the same as running just 2 nodes) - in that case the only turnkey solution providing fault tolerance is Premium HA.
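As a rough worked example of why 3 racks matter: with a replication factor of 3, a quorum is floor(3/2) + 1 = 2 replicas. With 3 racks, rack-aware placement puts at most 1 replica of a given row in each rack, so losing any single rack still leaves 2 replicas and quorum reads/writes succeed. With only 2 racks, one rack inevitably holds 2 of the 3 replicas, so losing that rack leaves just 1 replica and quorum operations fail - hence the split-brain exposure.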
21 Jul 2020 05:53 PM
Hi Radoslaw,
When you mentioned "If the cut-off lasts longer than 3 hours, Cassandra additionally requires a repair to be run to make the data consistent again."
How exactly do we run that repair?
Thanks.
Best Regards,
Wai Keat
21 Jul 2020 06:25 PM
/utils/cassandra-nodetool.sh repair -full -dc
However, I'd rather you not run that on your own unless your metric storage is small. This operation can take very long and is resource-consuming. In that case, I'd recommend contacting Dynatrace ONE to open a support case so we can assist you.
In the near future we will automate that process so this will not be required.
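For reference, the invocation needs the data centre name appended after -dc (the placeholder below is just an example - use the DC name shown by nodetool status), and progress can be watched with the usual nodetool subcommands, assuming the wrapper passes arguments straight through to nodetool:
# full repair limited to one data centre
/utils/cassandra-nodetool.sh repair -full -dc <dc_name>
# watch streaming and compaction activity while the repair runs
/utils/cassandra-nodetool.sh netstats
/utils/cassandra-nodetool.sh compactionstats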
23 Jul 2020 07:05 AM
23 Jul 2020 08:44 AM
You are right. I've updated my comment with the proper "-full -dc" option, which is the only way.
As I mentioned in my comment as well, I don't recommend running that without Dynatrace assistance.