Question

Large MS SQL restore - best practices

  • July 30, 2025
  • 1 reply
  • 51 views

  • Commvault Certified Expert

Hello Team

 

What are the best practices for MS SQL backup, and especially restore, of large databases in terms of splitting the database into multiple files (on the SQL side)?

 

Let’s assume we have a 10 TB MS SQL database. Which is better:

  1. Having 1 file for the whole DB
  2. Having 10x 1 TB files (see the sketch below)
  3. Having 160x 64 GB files?

And why?
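
For illustration, option 2 (10x 1 TB files) could be laid out on the SQL side roughly as in the sketch below; the database name, logical file names, and paths are just placeholders:

  -- Hypothetical layout for option 2: 10x 1 TB data files in one filegroup.
  -- SQL Server stripes data across the files of a filegroup (proportional fill).
  ALTER DATABASE BigDB ADD FILEGROUP DATA_FG;
  ALTER DATABASE BigDB ADD FILE
      (NAME = BigDB_Data01, FILENAME = 'E:\SQLData\BigDB_Data01.ndf', SIZE = 1024GB),
      (NAME = BigDB_Data02, FILENAME = 'E:\SQLData\BigDB_Data02.ndf', SIZE = 1024GB)
      -- ...and so on up to BigDB_Data10
  TO FILEGROUP DATA_FG;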

Let’s assume we can have 4-16 backup streams for this DB.

It’s a question about the performance of backup, restore, and the DB itself, not about the management and handling of files or the underlying storage RAID distribution/layout per DB/file.

 

Kind regards,

eMF

1 reply

  • Vaulter
  • August 5, 2025

Hi Maciek,

 

I am assuming we are talking about a standard streaming backup of the data here.

 

The number of streams configured in Commvault dictates the amount of data that is sent to or received from the SQL server concurrently. The number of database files on the SQL side is not necessarily a performance concern from our perspective, although there may be some best practices from Microsoft that you can look into in that regard. I believe the appropriate file size will come down to the disk filesystem and the number of volumes those database files are distributed across.
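
If it helps, the current data file layout and sizes for a given database can be checked quickly on the SQL side; a minimal example (the database name is just a placeholder):

  -- Check how a database's files are currently laid out
  SELECT DB_NAME(database_id) AS database_name,
         type_desc,
         name AS logical_name,
         physical_name,
         size * 8 / 1024 AS size_mb          -- size is stored in 8 KB pages
  FROM   sys.master_files
  WHERE  database_id = DB_ID('BigDB')
  ORDER  BY file_id;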

 

During backup and restore, Commvault is simply providing a conduit for SQL to send or receive backup data. SQL will split the data as evenly as possible across the provided streams and create one backup device per stream during backup. As a result, a restore is necessarily constrained to the same number of streams that was used for the backup.
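
The same one-backup-device-per-stream behaviour can be seen with a plain native backup. Commvault presents virtual devices rather than disk files, but the striping and the restore constraint work the same way; a rough T-SQL illustration with placeholder paths:

  -- 4 streams = 4 stripes; all 4 stripes are needed together again at restore time
  BACKUP DATABASE BigDB
  TO DISK = 'X:\Backup\BigDB_stripe1.bak',
     DISK = 'X:\Backup\BigDB_stripe2.bak',
     DISK = 'X:\Backup\BigDB_stripe3.bak',
     DISK = 'X:\Backup\BigDB_stripe4.bak'
  WITH COMPRESSION, STATS = 5;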

 

At a certain point, increasing the number of streams becomes counterproductive, as each additional stream adds resource requirements. There is a limit at which adding streams causes resource bottlenecks, but it is different in each environment.
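
One way to judge whether extra streams are still paying off is to compare backup throughput over time from the SQL Server backup history; for example (the database name is a placeholder):

  -- Rough throughput per recent backup of the database
  SELECT TOP (20)
         database_name,
         type,                                 -- D = full, I = differential, L = log
         backup_start_date,
         DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_s,
         backup_size / 1024 / 1024 / 1024 AS size_gb,
         backup_size
             / NULLIF(DATEDIFF(SECOND, backup_start_date, backup_finish_date), 0)
             / 1024 / 1024 AS throughput_mb_per_s
  FROM   msdb.dbo.backupset
  WHERE  database_name = 'BigDB'
  ORDER  BY backup_start_date DESC;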

 

My usual recommendation is to use 4 streams and increase the Maximum Transfer Size in the subclient properties in Commvault to 4 MB. This will double the memory allocation per stream and allow each read to take a “bigger bite” of the data. You can also increase the number of buffers used here, which again increases memory consumption but can also increase read speed.

https://documentation.commvault.com/v11/expert/enhancing_sql_backup_performance.html
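
For comparison, the native T-SQL backup options that correspond to those two knobs are MAXTRANSFERSIZE and BUFFERCOUNT (4 MB is also the largest transfer size SQL Server accepts); a sketch with a placeholder path:

  -- Native analogue of the subclient settings described above
  BACKUP DATABASE BigDB
  TO DISK = 'X:\Backup\BigDB.bak'
  WITH MAXTRANSFERSIZE = 4194304,   -- 4 MB per transfer
       BUFFERCOUNT = 64,            -- more buffers = more memory, potentially faster reads
       COMPRESSION;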

 

I hope that answers your questions.


Regards,

David Dwyer