Question

MCSS library performance



Hello,

 

I have an issue that could use some help.

We have a primary copy on a disk library, a secondary copy on another disk library, and a third copy on an MCSS cloud library. All copies have dedup enabled.

Auxiliary copies run fine between the disk libraries, but sending the data to the MCSS library takes forever. We tried increasing the number of streams and using either disk library as the source for the aux copies, but we can't achieve acceptable performance. What we see is that the process generates excessive read operations on the source library.

Dedup block size is 128 KB on the disk libraries and 512 KB on MCSS.

Commvault version 11.24.

Any help would be appreciated.

Regards,

 

Jean-xavier

12 replies

  • Vaulter
  • 630 replies
  • January 24, 2023

Good afternoon. I would recommend running Cloud Test Tool against the MCSS library. There is a Metallic storage option in Cloud Test Tool that will pull your credentials from the CommServe database automatically.

https://documentation.commvault.com/2022e/expert/9232_testing_connectivity_with_cloud_test_tool.html


  • Author
  • Byte
  • 10 replies
  • January 25, 2023

Hello,

Thank you for your answer, but there is no such MCSS option in version 11.24.

 


  • Author
  • Byte
  • 10 replies
  • January 27, 2023

Does anybody know if the dedup block size could have an impact?

 


Scott Moseman
Vaulter
jxbarb wrote:

Does anybody know if the dedup block size could have an impact?

 

Absolutely.

https://documentation.commvault.com/11.24/expert/12411_deduplication_building_block_guide.html

“We recommend you to use default block size of 128 KB for disk storage and 512 KB for cloud storage. If cloud storage is used for secondary copies (that use disk copies as source), then we recommend you to use same block size as the source copy.”

Thanks,
Scott


  • Author
  • Byte
  • 10 replies
  • January 30, 2023

Hello Scott,

 

I read that, but I'm still trying to understand how this could impact the read activity on the source library.

Changing this parameter would have a big impact on the infrastructure, so I'd rather be sure...


Scott Moseman
Vaulter
jxbarb wrote:

I read that, but I'm still trying to understand how this could impact the read activity on the source library.

 

If the source (disk) is 128 KB and the destination (cloud) is 512 KB, you need 4x the reads from the source for each write to the destination.  It’s not as efficient as aligning the block sizes.
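To illustrate the arithmetic, here is a small sketch (a hypothetical illustration, not Commvault code; the function name is my own) of the read amplification when the source dedup block size is smaller than the destination's:

```python
def reads_per_dest_block(source_kb: int, dest_kb: int) -> int:
    """Source-side block reads needed to assemble one destination block."""
    if dest_kb % source_kb != 0:
        raise ValueError("destination block size must be a multiple of the source's")
    return dest_kb // source_kb

# 128 KB source blocks feeding 512 KB destination blocks:
print(reads_per_dest_block(128, 512))  # → 4 reads per write
```

With aligned block sizes (128/128 or 512/512) the ratio drops to 1, which is why the documentation recommends matching the source copy's block size for cloud secondary copies.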

Thanks,
Scott


Zubair Ahmed Sharief
Vaulter

Hello @jxbarb 

Have you tried using the same MAs (MediaAgents) for the primary copy and the MCSS copy?


  • Author
  • Byte
  • 10 replies
  • February 2, 2023
Zubair Ahmed Sharief wrote:

Hello @jxbarb 

Have you tried using the same MAs (MediaAgents) for the primary copy and the MCSS copy?

Hello,

 

Yes I did.

We also ran some tests with a new MCSS library whose dedup block size was aligned with the disk libraries, and we didn't notice any improvement.


Zubair Ahmed Sharief
Vaulter

@jxbarb: It looks like a network issue. Could you check the available bandwidth, whether the network link is shared with other apps, and what route it takes to MCSS? Is there a firewall or network device that could slow the connection down even with enough bandwidth?


  • Author
  • Byte
  • 10 replies
  • February 2, 2023

There is no network issue.

What we see are massive queue lengths on the disks hosting the disk library, and this happens only during the aux copy to MCSS.


Zubair Ahmed Sharief
Vaulter

@jxbarb: Massive disk queue lengths are definitely a problem to look into.

If you have a test/dev environment, you can try changing the dedup block size as suggested by @Scott Moseman. That should help resolve the issue.


  • Author
  • Byte
  • 10 replies
  • February 3, 2023
Zubair Ahmed Sharief wrote:

@jxbarb: Massive disk queue lengths are definitely a problem to look into.

If you have a test/dev environment, you can try changing the dedup block size as suggested by @Scott Moseman. That should help resolve the issue.

@Zubair Ahmed Sharief We already created a 128 KB MCSS library and ran some tests without any noticeable improvement. We might need to test with more data to get relevant results.

