Hello Community

I am currently troubleshooting high I/O during SQL backups. Upon investigation, I found the dominant wait type to be BACKUPIO, identified with a PowerShell script that queries the SQL Server wait statistics. Further analysis of the Commvault performance log revealed that writing to the media is slow, which is consistent with that finding.
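For context, here is a minimal sketch of the kind of wait-stats check I mean (not my exact script). It assumes the SqlServer PowerShell module is installed, the login has VIEW SERVER STATE permission, and 'SQLPROD01' is just a placeholder instance name:

# Query SQL Server backup-related wait statistics.
# Assumes the SqlServer module is installed and the login has VIEW SERVER STATE.
Import-Module SqlServer

$query = @"
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD')
ORDER BY wait_time_ms DESC;
"@

# 'SQLPROD01' is a placeholder; point this at the instance being backed up.
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query $query | Format-Table -AutoSize

A disproportionately high wait_time_ms on BACKUPIO generally points to the backup target (media write) being the bottleneck rather than the reader side, which matches what the Commvault performance log reported.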

Currently, our storage lives in Azure, with three copies (primary, secondary, and tertiary).
The primary and secondary copies have deduplication enabled, each with a block-level DDB factor of 512 KB.

To achieve a better balance between write performance and recoverability, the performance log recommends adjusting the block size in the data path properties to 128 KB or 256 KB.

My question is whether I need to change the block-level DDB factor to 128 KB on both the source and destination cloud libraries, and if so, what is the proper procedure to adjust the value? Do I need to put the MediaAgent in maintenance mode, ensure that no jobs are running against the DDB/MA, and then reboot the server after the change?

Finally, after making the change, will both DDBs be rebaselined? Should I manually run a full backup for all associated storage policies, or will the associated backups be converted to fulls automatically?

Also, increasing the block size for disk libraries may increase the memory consumption of the CVD process. By roughly what percentage will memory usage increase?

thank you

 Hi @DanC 

My understanding is that if you're backing up directly to cloud, then the recommended block size is 512 KB:

“For a complete cloud environment where all copies use cloud storage, we recommend to use default block size of 512 KB.”

@Shaun Hunt, thank you.

