
We recently implemented Commvault backup for our MS SQL servers. Most of the backups use the default subclient.

However, I have read previously that for a big and/or critical database, say 1 TB in size, it would be a good idea to create an additional subclient. I think it had something to do with a failed backup forcing all databases to be restarted again? Is that correct? Can someone shed some light on this?

Let's say I have 30 databases on a server. If one fails to back up, do the other 29 need to be rerun from the start?

Please advise

Thanks

Hi @JohnCV 
 

Based on the outcome of the previous backup phase, only the databases that did not complete successfully need to be retried. This typically applies when a data commit error is encountered, which prompts a retry, or a conversion to a full backup in the case of a log or differential backup.
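To make the retry behavior concrete, here is a minimal sketch of the idea in Python. This is not Commvault's actual API; `backup_database` is a hypothetical callable standing in for a per-database backup job, and the point is only that each pass retries the failures, not the databases that already completed.

```python
# Illustrative sketch only, NOT Commvault's implementation: a backup phase
# that retries only the databases that failed on the previous attempt.
# backup_database is a hypothetical callable returning True on success.

def run_backup_phase(databases, backup_database, max_attempts=3):
    """Back up all databases, retrying only the failures on each pass."""
    pending = list(databases)
    for _attempt in range(max_attempts):
        # Attempt every pending database; keep only the ones that failed.
        failed = [db for db in pending if not backup_database(db)]
        if not failed:
            return []          # everything completed successfully
        pending = failed       # next pass touches only the failures
    return pending             # databases still failing after all attempts
```

In this model, a single failing database does not cause the other 29 successful ones to be re-run; only the failed one is attempted again.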

 

Implementing multiple subclients enables backup concurrency, which can have both advantages and disadvantages depending on your SQL Server specifications.

 

If you have a significantly large database, it might be beneficial to assign it to its own subclient. This allows it to be backed up in parallel with the other databases in the instance. However, keep in mind that running more backups in parallel also increases the resource requirements accordingly.
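A rough sketch of what that parallelism looks like, assuming a hypothetical `backup_subclient` callable (again, not Commvault's API): the large database's subclient and the default subclient run as independent jobs rather than one serial sequence.

```python
# Illustrative sketch only: running each subclient's backup job concurrently.
# backup_subclient is a hypothetical callable, not a real Commvault call.
from concurrent.futures import ThreadPoolExecutor

def backup_all(subclients, backup_subclient, max_workers=2):
    """Run each subclient's backup in parallel; return per-subclient results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(backup_subclient, name)
                   for name in subclients}
        # Collect results; a failure in one subclient does not block the others
        # from running, though it will surface here when its result is read.
        return {name: fut.result() for name, fut in futures.items()}
```

The trade-off the answer describes shows up as `max_workers`: more concurrent jobs finish the window sooner but consume proportionally more CPU, I/O, and network on the SQL server and media agent.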


Thanks, Emils, for the prompt response.

Do you think a 1 TB database is large?

Regarding your point that multiple subclients enable backup concurrency, with both advantages and disadvantages depending on the SQL Server specifications: if I create a separate schedule policy that targets that specific subclient, different from the default one, I would think that would address the resource concern.

Please let me know.

Thank you all


A 1 TB database is considered fairly standard these days. And yes, separate schedules for the subclients will help reduce resource contention.

 

