Solved

Partitioned DDBs Best Practice

  • 29 December 2022
  • 3 replies
  • 412 views

Badge +4

Hi Guys,

We are using partitioned DDBs running on 2 MediaAgents on Azure VMs in our environment.

Our current deduplication setting at the GDSP level is to seal and start a new DDB in case of any DDB corruption, instead of pausing and recovering the current DDB. Also, the option “Allow jobs to run to this copy while at least 1 partition is online” is not selected.

Just wanted to check what is the best practice for partitioned DDBs as per Commvault.

CS : Azure VM

Primary Library : Azure BLOB

Secondary : Metallic

Best answer by Damian Andre 30 December 2022, 01:52

3 replies

Userlevel 6
Badge +15

Good afternoon. I would say the setting to seal in case of corruption is the best setting, as you do not have to worry about reconstructions and DDB verifications, which take a lot of time and can increase charges. As for the option “Allow jobs to run to this copy while at least 1 partition is online”, that is personal preference. If it is not selected, jobs cannot keep running to the remaining partition when one goes down.
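
To illustrate what that option changes (a rough sketch of partition routing, assuming a simple hash-modulo model; this is not Commvault’s actual implementation, and all names and values here are hypothetical):

```python
# Hypothetical model of a 2-partition DDB: each block signature is routed
# to one partition, and the question is what happens when that partition's
# MediaAgent is down.
PARTITIONS = {0: "online", 1: "offline"}  # e.g. the second MA is down

def route(block_hash: int, allow_partial: bool) -> int:
    """Pick a partition for this signature; fail or fall back if offline."""
    target = block_hash % len(PARTITIONS)
    if PARTITIONS[target] == "online":
        return target
    if not allow_partial:
        # Option unchecked: the job cannot proceed once its target
        # partition is unavailable.
        raise RuntimeError("DDB partition offline - backup goes pending")
    # Option checked: send the signature to a surviving partition instead.
    return next(p for p, s in PARTITIONS.items() if s == "online")

for h in (101, 102):
    try:
        print(h, "-> partition", route(h, allow_partial=False))
    except RuntimeError as err:
        print(h, "->", err)
```

With allow_partial=True, hash 101 would land on partition 0 instead of failing, which is the behavior the checkbox buys you (at the cost of some dedupe efficiency while the partition is down).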

Badge +4

Reconstruction and DDB verifications: how will they increase the charges?
Also, if the DDB is sealed, a new baseline will be created. Won’t that increase the cost when the storage is cloud? Azure BLOB in our case.

Userlevel 7
Badge +23

There isn’t really a best practice here; it’s a trade-off. If you seal, you will incur more storage charges, since each block will be seen as new, and the old blocks won’t age off until every job on the sealed store meets retention.
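
To make that concrete, here is a minimal sketch of signature-based dedupe, assuming a simple hash-lookup model (not Commvault’s actual DDB format; the block sizes are illustrative only):

```python
import hashlib

ddb: dict[str, bool] = {}   # toy "DDB": block signature -> already stored?

def backup(blocks: list[bytes]) -> int:
    """Return how many bytes this backup actually uploads to storage."""
    written = 0
    for block in blocks:
        sig = hashlib.sha256(block).hexdigest()
        if sig not in ddb:      # unseen signature -> upload the block
            ddb[sig] = True
            written += len(block)
    return written

data = [b"A" * 1024, b"B" * 1024]
print("first full: ", backup(data), "bytes uploaded")   # 2048: all new
print("second full:", backup(data), "bytes uploaded")   # 0: fully deduped

ddb = {}  # sealing starts a fresh DDB, so every signature is "new" again
print("after seal: ", backup(data), "bytes uploaded")   # 2048: new baseline
```

Both copies of the data sit in Azure Blob until everything written against the sealed store ages off, which is where the extra cost comes from.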

The alternative is to reconstruct, which means backups don’t run until the reconstruction job completes, so you may miss a few hours or more of new backups while waiting for it to finish.

Personally, since cloud costs can be expensive, I’d recommend the reconstruction option, and enable garbage collection (may require upgrading the DDB) for optimal performance.
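
As a back-of-the-envelope comparison (every figure below is a placeholder; substitute your own store size and Azure Blob pricing):

```python
# All numbers hypothetical - plug in your real values.
on_disk_tb = 12.0            # size of the deduped store in Azure Blob
blob_price_tb_month = 20.0   # USD per TB-month for your tier (assumption)
overlap_months = 2.0         # how long old + new baselines coexist

# Sealing: a fresh baseline is written while the sealed store waits
# for its last job to age off, so you briefly pay for both copies.
seal_extra_usd = on_disk_tb * blob_price_tb_month * overlap_months
print(f"approx extra spend from sealing: ${seal_extra_usd:,.0f}")

# Reconstruction: no second baseline, but backups wait while the DDB
# rebuilds, so the cost is missed backup hours rather than storage.
```

In most cloud setups the storage overlap tends to outweigh a few hours of paused backups, which is why the reconstruction option usually wins on cost.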
