
Hi guys,

I would like to know whether there are any recommendations for the block size of a cloud library.

We have cloud storage in our data center that we would like to use for backup. On the storage, we have the ability to choose the block size. Should we specify a block size or keep the default (32 KB)?

Note: for disk libraries we usually format the local drive with a 64 KB block size, but we didn't find anything equivalent for cloud libraries.

Thanks in advance.

Best regards

@Adel BOUKHATEM Just keep the default. With cloud storage we don't really speak about block size but about object size: by default, Commvault uses an 8 MB object size for deduplicated operations and 32 MB for non-deduplicated operations.
 

I would suggest discussing this with the supplier of the object storage solution. You may, for example, need to consider using multiple buckets/data paths.
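Those default object sizes mainly determine how many objects a given amount of backup data produces on the storage, which is often what object-storage vendors care about when sizing. A rough sketch of the arithmetic, assuming a flat 1 TB written (real counts will differ with dedupe and compression):

```shell
# Objects produced per 1 TB written, for the two default Commvault
# object sizes mentioned above (8 MB deduplicated, 32 MB non-deduplicated).
TB_BYTES=$((1024 * 1024 * 1024 * 1024))
echo "8 MB objects per TB:  $((TB_BYTES / (8 * 1024 * 1024)))"    # 131072 objects
echo "32 MB objects per TB: $((TB_BYTES / (32 * 1024 * 1024)))"   # 32768 objects
```

So a deduplicated copy produces roughly four times as many objects per TB as a non-deduplicated one, which is worth mentioning to the storage supplier.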


Hi @Onno van den Berg ,

Thanks for your response. The object storage supplier doesn’t know much about Commvault.

However, he says that the object size is set to 32 KB.

In which case should I consider multiple buckets?

Note: I would like to achieve an alternate data path for the secondary copy, which will use the object storage through two Media Agents.

 

Regards

 


Please share the name of the solution…

Adding additional MAs to the same cloud library is no problem and is a very easy task. 


@Onno van den Berg It is a Huawei Object Storage.

What is best: creating a bucket for each MA, or only one bucket shared by both?

I don't really know how to achieve that (off-topic question).


We ourselves use Cloudian Hyperstore, and for that implementation it is recommended to create at least five buckets per cloud library. If you do not know how to create a bucket, I would call in some help from your colleagues. And if you were referring to how to add additional buckets to a cloud library → please look up the documentation or contact your partner for help.
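If the Huawei storage exposes an S3-compatible API, buckets can usually be created with the standard AWS CLI pointed at the vendor's endpoint. A minimal sketch, assuming the AWS CLI is installed and configured with credentials for that endpoint; the endpoint URL and bucket names are placeholders, and the commands are echoed as a dry run (remove the `echo` to execute them for real):

```shell
# Dry run: print the create-bucket command for each of five buckets
# on a hypothetical S3-compatible endpoint.
ENDPOINT="https://objectstore.example.local"   # placeholder endpoint URL
for i in 1 2 3 4 5; do
  echo aws s3api create-bucket --bucket "commvault-lib-bucket-$i" --endpoint-url "$ENDPOINT"
done
```

Check with your storage vendor first, though; some S3-compatible platforms have their own CLI or console for bucket management.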


I would like to know more about the object sizes used. It has been stated that when using dedupe the object size is 8 MB, yet the default block size in Commvault for the DDB is 512 KB. So I just wanted to confirm that 8 MB is indeed the size of the objects being sent to the S3 endpoint when using dedupe.

Also, what are some of the advantages of using multiple buckets from a Commvault perspective? Is more better because of how it establishes sessions, threads, etc.? For admin simplicity I would prefer a single bucket per endpoint that all my Media Agents use, but if there are performance advantages I would certainly like to know about those considerations. Thanks for any input.


@Mark W Unfortunately, that level of technical detail about how Commvault works is not available online. You might reach out to your account team to get your hands on documentation, or follow a technical Commvault course.

In regards to the number of buckets: this is currently a best practice when using Commvault in conjunction with Cloudian Hyperstore, and the recommendation comes from Cloudian's end. It really depends on the cloud storage solution being used. If you are using AWS S3, you can just use a single bucket without issues.



OK thanks, definitely not using Cloudian.

