
Hello,

I have an issue I could use some help with.

We have a primary copy on a disk library, a secondary copy on another disk library, and a third copy on an MCSS cloud library. All copies have dedup enabled.

Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can't achieve suitable performance. What we see is that the process generates excessive read operations on the source library.

Dedup block size is 128 KB on the disk libraries and 512 KB on MCSS.

Commvault version 11.24.

Any help would be appreciated.

Regards,

Jean-xavier

Does anybody know if the dedup block size could have an impact?

Absolutely.

https://documentation.commvault.com/11.24/expert/12411_deduplication_building_block_guide.html

“We recommend you to use default block size of 128 KB for disk storage and 512 KB for cloud storage. If cloud storage is used for secondary copies (that use disk copies as source), then we recommend you to use same block size as the source copy.”

Thanks,
Scott


I read that, but I'm still trying to understand how this could have an impact on the read activity on the source library.

If the source (disk) is 128 KB and the destination (cloud) is 512 KB, you need four reads from the source for every block written to the destination. It's not as efficient as aligning the block sizes.
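To put rough numbers on it, here is a back-of-the-envelope sketch (the block sizes are the ones from this thread; the data volume is made up for illustration):

```python
# Back-of-the-envelope read amplification for mismatched dedup block sizes.
# Block sizes are from this thread; the data volume is a made-up example.
source_block_kb = 128    # dedup block size on the disk libraries
dest_block_kb = 512      # dedup block size on the MCSS cloud library
data_gb = 100            # hypothetical amount of data to aux-copy

reads_per_dest_block = dest_block_kb // source_block_kb   # 512 / 128 = 4
dest_blocks = data_gb * 1024 * 1024 // dest_block_kb      # 512 KB writes
total_source_reads = dest_blocks * reads_per_dest_block

print(f"{reads_per_dest_block} source reads per destination block")
print(f"{dest_blocks:,} destination blocks -> {total_source_reads:,} source reads")
```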

Thanks,
Scott


Good afternoon. I would recommend running the Cloud Test Tool against the MCSS library. There is an option for Metallic storage in the Cloud Test Tool that will pull your credentials from the CommServe database automatically.

https://documentation.commvault.com/2022e/expert/9232_testing_connectivity_with_cloud_test_tool.html


Hello,

Thank you for your answer, but there is no such MCSS option in version 11.24.


Hello @jxbarb 

Have you attempted to use the same MediaAgents (MAs) for the primary copy and the MCSS copy?

Hello,

Yes, I did.

We also ran some tests with a new MCSS library whose dedup block size was aligned with the disk library, and we didn't notice any improvement.


@jxbarb: It looks like a network issue. Could you check the available bandwidth, and whether the network link is shared with other apps? What route does the traffic take to MCSS? Is there a firewall or network device that could slow the connection down even with enough bandwidth?
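For a crude sanity check of effective throughput from the MediaAgent toward the cloud, a rough sketch like this could help (the URL is a placeholder, not an MCSS endpoint; point it at any HTTPS target you are allowed to upload to):

```python
# Rough effective-throughput probe from the MediaAgent toward an HTTPS target.
# The URL is a placeholder -- substitute an endpoint you can upload to.
import os
import time
import requests  # pip install requests

URL = "https://your-test-endpoint.example/upload"   # hypothetical target
SIZE_MB = 64

payload = os.urandom(SIZE_MB * 1024 * 1024)         # incompressible test data
start = time.monotonic()
resp = requests.put(URL, data=payload, timeout=300)
elapsed = time.monotonic() - start

print(f"HTTP {resp.status_code}: {SIZE_MB / elapsed:.1f} MB/s effective upload")
```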


There is no network issue.

What we see are massive queue lengths on the disks hosting the disk library, and they happen only during the aux copy to MCSS.
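For anyone who wants to reproduce the observation, a minimal psutil sketch like this shows reads/sec and average read size on the device backing the library (the device name and interval are assumptions; adjust for your MediaAgent):

```python
# Sample disk I/O on the device backing the disk library during the aux copy.
# DISK name and INTERVAL are assumptions -- adjust for your environment.
import time
import psutil  # pip install psutil

DISK = "sdb"        # e.g. "PhysicalDrive1" on a Windows MediaAgent
INTERVAL = 10       # seconds per sample window

before = psutil.disk_io_counters(perdisk=True)[DISK]
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)[DISK]

reads = after.read_count - before.read_count
read_bytes = after.read_bytes - before.read_bytes

print(f"reads/sec: {reads / INTERVAL:.0f}")
if reads:
    # Lots of small reads (well under 512 KB) would fit the block-size
    # mismatch discussed earlier in the thread.
    print(f"avg read size: {read_bytes / reads / 1024:.0f} KB")
```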


@jxbarb: Massive disk queue lengths are definitely a problem to look into.

If you have a test/dev environment, you can try changing the dedup block size as suggested by @Scott Moseman. That should help resolve the issue.



@Zubair Ahmed Sharief We already created a 128 KB MCSS library and ran some tests, without any noticeable improvement. We might need to test with more data to get relevant results.


Hello Scott,

Changing the value of this parameter would have a big impact on the infrastructure, so I'd rather be sure...

