Question

HyperScale X Disk failure tolerance EC 4:2

  • March 18, 2025
  • 1 reply
  • 37 views


Hi,


The linked doc could do with a little more explanation, as I've been asked this question many times now: https://documentation.commvault.com/11.38/expert/resiliency_on_hyperscale_x.html

Is the below still correct?

HSX Cluster with EC 4:2 erasure coding (3 nodes or 6 nodes for example) can tolerate a total of 2 disk failures across the cluster or a single node ...

One would assume a cluster of 6 or more nodes could survive more than 2 disks in total going offline?

Shut down / restarting:

Erasure Code | Block Size (Nodes / Block) | Resilience Limit | Comments / Description
(4 + 2) | 3 Nodes | 1 node per block | Shutdown/reboot only one node at a time. Ensure Commvault services are up on the rebooted node before rebooting the next node in the block.
(4 + 2) | 6 or more Nodes | Up to 2 nodes per block | Can shutdown/reboot up to two nodes at a time. Ensure Commvault services are up on the rebooted nodes before rebooting the next node in the block.
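For anyone trying to reason about the limits above, here is a minimal sketch (not Commvault code). It assumes each 4+2 stripe spreads its 6 fragments evenly across the nodes of a block - 2 fragments per node on a 3-node block, 1 per node on a 6-node block - and it takes the worst case where every failed disk happens to hold a fragment of the same stripe:

```python
# Minimal sketch, not Commvault code. Assumes each 4+2 stripe writes its
# 6 fragments evenly across the nodes of one block: 2 fragments/node on a
# 3-node block, 1 fragment/node on a 6-node block. A stripe stays readable
# while no more than 2 of its 6 fragments are unavailable.

EC_DATA, EC_PARITY = 4, 2
FRAGMENTS = EC_DATA + EC_PARITY          # 6 fragments per stripe
MAX_LOST = EC_PARITY                     # at most 2 fragments may be lost

def fragments_per_node(nodes_in_block: int) -> int:
    return FRAGMENTS // nodes_in_block   # 3 nodes -> 2, 6 nodes -> 1

def stripe_survives(nodes_in_block: int, nodes_down: int = 0, disks_down: int = 0) -> bool:
    """Worst case: every failed disk holds a fragment of the same stripe."""
    lost = nodes_down * fragments_per_node(nodes_in_block) + disks_down
    return lost <= MAX_LOST

print(stripe_survives(3, nodes_down=1))   # True  - one node at a time on a 3-node block
print(stripe_survives(3, nodes_down=2))   # False - 4 fragments gone, only 2 tolerated
print(stripe_survives(6, nodes_down=2))   # True  - "up to 2 nodes per block"
print(stripe_survives(6, disks_down=3))   # False - 3 disks can still hit the same stripe
```

On those assumptions, even a block of 6 or more nodes cannot guarantee surviving 3 arbitrary disk failures, because in the worst case all 3 disks carry fragments of the same 4+2 stripe.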

Shutting down 2 nodes on a block of 6 or more nodes - will all new writes be blocked, or will they be written to the remaining nodes and then re-distributed once the other nodes are back online?

1 reply

R Anwar
Vaulter
  • 115 replies
  • March 22, 2025

Hi @RobsterFine

One would assume a cluster of 6 or more nodes could survive more than 2 disks in total going offline?
You can have two nodes down in a 6-node cluster.

Shutting down 2 nodes on a block of 6 or more nodes - will all new writes be blocked, or will they be written to the remaining nodes and then re-distributed once the other nodes are back online?
The writes will have no issues as long as space is available. However, the overall available space needs to be reduced by the data written on the two nodes which are down.
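To put rough, purely hypothetical numbers on that statement:

```python
# Hypothetical numbers, only to illustrate the reply above: while two nodes
# are down, the data they host is not reachable, so the space you can
# effectively rely on is the cluster's available space minus the data
# already written to the two down nodes.

cluster_available_tb  = 100   # assumed available space with all nodes up (illustrative)
data_on_down_nodes_tb = 30    # assumed data already written on the two down nodes (illustrative)

effective_available_tb = cluster_available_tb - data_on_down_nodes_tb
print(f"Effective available space with 2 nodes down: {effective_available_tb} TB")  # 70 TB
```

(The numbers are made up; the point is only that whatever sits on the two down nodes has to be discounted from what you treat as available until they rejoin.)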

Regards,



