
I’m backing up a few NAS subclients that are between 10-30TB each.

Currently they are scheduled to do a synthetic full backup every weekend with incremental backups every weeknight.

These are then aux copied off to tape.

Because of the very low change rate, in reality probably 95% of what’s written to tape each week is exactly the same as the week before.

The speed of the aux copy is becoming an issue, as I don’t seem able to aux to two drives consistently; it drops down to a single drive whenever other backup jobs need the second drive.

So my question is: am I stuck in the past with a traditional GFS mindset, and should I look at the “Automatic Synthetic Full” backup option?

All backups are kept on disk and tape for a minimum of 30 days, and extended retention keeps monthly full backups for a year.

We have a stupidly quick media agent with an NVMe dual-partitioned DDB, so in theory we’re following best practice, and the synthetic fulls only take an hour or two each. But the five days spent auxing the lot to tape is painful and is starting to feel like overkill.

Thanks in advance 😀

Hello @Paul Hutchings 

Thanks for the great question! Assuming your concern is not the number of tapes you are using but the amount of time it takes to move the data to tape, there are a number of performance tuning features for tape drives to get them running faster.

If you are reading with one stream and writing with one stream, you will find tape can outperform any disk drive, even the fastest. Disk only pulls ahead of tape because you can read from it with multiple streams, while tape is restricted to one writer per drive. This is where the multiplexing feature comes in: you read with multiple streams from disk and consolidate them into a single write stream for the tape. Tape drives can normally write at 100 MB/s (roughly 360 GB per hour) at worst, and can go much higher in some cases. Depending on your source storage, a multiplexing factor of 5 can drastically improve your performance.
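To put rough numbers on this, here is a back-of-the-envelope sketch. The per-stream read rate and multiplex factor are illustrative assumptions, not measurements from any real environment; only the 100 MB/s tape figure comes from the discussion above.

```python
# Back-of-the-envelope aux copy estimate. The per-stream read rate and
# multiplex factor below are assumed values for illustration only.
TAPE_WRITE_MBPS = 100        # conservative tape write speed from the post
PER_STREAM_READ_MBPS = 25    # assumed read rate of a single stream from disk
MULTIPLEX_FACTOR = 5         # source streams consolidated onto one tape writer

# The tape drive caps throughput; multiplexing lifts the read side up to it.
single_stream = min(TAPE_WRITE_MBPS, PER_STREAM_READ_MBPS)
multiplexed = min(TAPE_WRITE_MBPS, PER_STREAM_READ_MBPS * MULTIPLEX_FACTOR)

def days_to_copy(terabytes, rate_mbps):
    """Days needed to move `terabytes` of data at `rate_mbps` MB/s."""
    megabytes = terabytes * 1_000_000
    return megabytes / rate_mbps / 86_400  # 86,400 seconds per day

for label, rate in [("1 stream", single_stream),
                    ("multiplexed x5", multiplexed)]:
    print(f"{label}: {rate} MB/s -> 30 TB in {days_to_copy(30, rate):.1f} days")
```

With these assumed numbers, a 30 TB subclient drops from roughly two weeks at a single 25 MB/s read stream to about three and a half days once multiplexing saturates the drive, which is why the multiplex factor matters so much when the read side, not the tape, is the bottleneck.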

NOTE: Multiplexing is great, but the trade-off is that your restores will take longer, as the data is interleaved on the tape and excess reads will occur.

SECOND NOTE: Commvault cannot read data with more streams than it was written with, so when running your synth fulls you need to make sure they are using more than one stream, or multiplexing won’t do anything.

If you want to drill down and confirm where the bottleneck is, I recommend using the “CVPerfMgr.log” on the MA that is writing the data. Run an aux copy for 1-2 hours, suspend it, and have a look at that log. It gives a full per-stream summary of the performance.

https://documentation.commvault.com/2023e/expert/data_multiplexing_overview.html

https://documentation.commvault.com/2023e/expert/configuring_multiple_streams_for_synthetic_full_backups.html


Kind regards

Albert Williams

