Solved

NAS CIFS share backup access nodes usage



Maybe 

How are multiple access nodes used in NAS CIFS share backups? Load balancing, failover, or both? Is it possible to configure this? I did not find this in the Commvault documentation. Thanks to all.

Best answer by Ledoesp


3 replies

Aplynx
  • Vaulter
  • 291 replies
  • October 8, 2021

I think the default is 10, but it also depends on network/machine performance. I've seen cases where running too many access nodes overloads the backup operation and makes the backup slower than it would be with fewer nodes at once. You'll probably want to test and see what the source location and network can handle.

 

https://documentation.commvault.com/commvault/v11_sp20/article?p=107204.htm
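
To act on that advice, here is a minimal sketch (Python 3.9+, with a hypothetical mount point; not a Commvault tool) that measures aggregate read throughput from a mounted CIFS share at increasing parallelism, as a rough way to see where adding concurrent readers stops helping:

```python
# Hypothetical benchmark: read files from a mounted CIFS share with an
# increasing number of concurrent readers and report aggregate MB/s.
# SHARE is an assumed mount point; adjust for your environment. Note that
# the OS page cache can inflate repeated runs over the same files.
import os
import time
from concurrent.futures import ThreadPoolExecutor

SHARE = "/mnt/cifs_share"  # hypothetical mount point
CHUNK = 4 * 1024 * 1024    # 4 MiB read size

def read_file(path: str) -> int:
    """Read one file in chunks; return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

def throughput_mb_s(files: list[str], workers: int) -> float:
    """Aggregate MB/s when `workers` files are read concurrently."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(read_file, files))
    return total / (time.monotonic() - start) / 1e6

paths = [os.path.join(SHARE, f) for f in os.listdir(SHARE)]
files = [p for p in paths if os.path.isfile(p)][:64]
for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} readers: {throughput_mb_s(files, n):8.1f} MB/s")
```

If throughput flattens or drops past some reader count, that is roughly where extra access-node streams stop paying off for this source.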


  • Author
  • Byte
  • 50 replies
  • October 8, 2021

Thank you @Aplynx. My question was:

How are these access nodes used? Load balancing, failover, or both?


Ledoesp
  • Vaulter
  • 227 replies
  • Answer
  • October 8, 2021

I configured this some time ago with several HyperScale nodes acting as NFS access nodes. When you provide a list of access nodes, the first one acts as the coordinator; you can see in the DistributedIda log how the coordinator talks with the other nodes and assigns them data and streams to process.

I would guess that if the coordinator is down, the job will be handled on the next resume by the next node in the list, but I never wanted to shut down a HyperScale node to test this. Since several subclients were created, I engaged a different coordinator on each by changing the order of the access nodes per subclient, so the coordinator role was split between all nodes in a balanced way.
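
The behavior described above can be modeled with a short, purely illustrative sketch (this is not Commvault code; node and subclient names are invented): it shows how rotating the access-node list per subclient spreads the coordinator role, and where a failover candidate would sit.

```python
# Illustrative model only: assumes, per the answer above, that the first
# node in a subclient's access-node list acts as coordinator. Node and
# subclient names are hypothetical.
from collections import Counter

NODES = ["hs-node1", "hs-node2", "hs-node3"]

def access_node_list(subclient_index: int, nodes: list[str]) -> list[str]:
    """Rotate the node list so each subclient starts with a different node."""
    k = subclient_index % len(nodes)
    return nodes[k:] + nodes[:k]

coordinators = Counter()
for i in range(6):  # six hypothetical subclients
    order = access_node_list(i, NODES)
    coordinators[order[0]] += 1  # first node in the list coordinates
    # order[1] would be the presumed fallback if the coordinator is down
    print(f"subclient{i}: {order} -> coordinator {order[0]}")

print("coordinator assignments:", dict(coordinators))
```

With three nodes and six subclients, each node coordinates two subclients, which matches the balanced split described in the answer.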



