When going from Windows DiskLib to S3-compatible CloudLib (NetApp), what are the best block sizes to use?

  • 9 February 2022
  • 1 reply


We have a client with a DiskLib using a 128 KB block size and a CloudLib on S3-compatible NetApp storage using a 512 KB block size. I know that S3-compatible storage can be set lower, unlike a true cloud bucket, but what are the optimal settings in this case?


I know from testing that Commvault has confirmed up to a 30x performance gain when both sides of an aux copy use the same block size, versus reinflating data when going from one size to another. If this were to be rebuilt, what would be the best choice on both sides: 128, 256, or 512 KB?


TIA Karl


Best answer by William Dennehy 9 February 2022, 16:13






Thank you very much for asking this question. You are 100% right: you will see a huge performance increase by using the same block size. When configuring the cloud library DDB, keep its block size the same as that of the primary deduplication database (DDB).

As stated in our documentation:

Block Size

“The secondary copies associated to cloud storage libraries use the same block size as the primary copy. For example, if primary copy uses 128 KB, then secondary copies also use 128 KB. For backup operations that back up directly to cloud storage libraries, secondary copy uses 512 KB.”


Since changing the DDB block size requires the DDB to be sealed, most customers do not have enough free space on the disk library to lay down a new baseline. Most customers therefore opt to seal the cloud DDB and change it to 128 KB to match the primary. If you can seal the primary instead, you are free to increase the primary DDB to 512 KB to match the cloud copy. Just keep in mind that you will see less dedupe savings at 512 KB than at 128 KB, but you will get better performance from 512 KB. If you want the best dedupe savings on the local disk, it is recommended to keep the 128 KB block size.
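To see why the smaller block size dedupes better, here is a toy model (not Commvault's actual dedupe engine, and the record layout, sizes, and 200-record stream are all made up for illustration). It builds a synthetic backup stream where most of each record repeats, then measures how much data a fixed-block deduplicator would actually store at different block sizes:

```python
# Toy fixed-block dedupe model. Assumption: a synthetic stream of 200
# records, each 1024 bytes, where the first 768 bytes of every record
# are identical (redundant data) and the last 256 bytes are unique.
import hashlib
import random

random.seed(42)
common = random.randbytes(768)
stream = b"".join(common + random.randbytes(256) for _ in range(200))

def stored_fraction(data: bytes, block_size: int) -> float:
    """Fraction of the stream actually stored after fixed-block dedupe:
    split into fixed-size blocks, hash each, keep only unique blocks."""
    chunks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(unique) / len(chunks)

for bs in (128, 512, 1024):  # stand-ins for 128 KB / 512 KB / 1 MB blocks
    print(f"block {bs:>4}: stores {stored_fraction(stream, bs):.0%} of the data")
```

Small blocks isolate the repeated 768-byte region into blocks that dedupe away, while a large block spans both the repeated and the unique bytes, so every block ends up unique and nothing dedupes. That is the savings-vs-throughput trade-off described above: bigger blocks mean fewer hashes and lookups (faster), but less redundancy detected.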