Solved

Auxiliary copy jobs slow performance

  • 29 May 2022
  • 6 replies
  • 1052 views

Badge +2

Hi All,

After installing a new library (NAS with CIFS share) with a new storage policy (no compression, no encryption):

Backup jobs are running fine, but aux copy jobs (replication to the DR site) are much slower than before, roughly 10 times slower.

We checked the network performance between the MA and the repository, and between the production MA and the DR-site MA; all is working well.

What could be the problem? Is there any fine tuning to make the aux copy run faster?

Thanks.


Best answer by Laurent 30 May 2022, 12:02


6 replies

Userlevel 4
Badge +10

Hi @Eyal 

On the new library, right-click one of the mount paths and select Validate Mount Path, add the MediaAgent associated with the library to the dropdown, and select the parameters below.

 

This will launch a validation job that benchmarks the read/write performance of the new library.

 

The results will pop up in the Console. Once it completes, send a screenshot of the results. I suspect you may be seeing slow read performance from the new library, but this test will help pinpoint the issue.
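While waiting for the validation job, a quick out-of-band sanity check of the mount path can help separate share performance from Commvault behaviour. This is a rough sketch, not Commvault's validation job; the path you pass in is whatever the library's mount path is on the MediaAgent.

```python
# Rough throughput check for a mount path (a sketch, not Commvault's
# Validate Mount Path job): write a test file, then read it back, and
# report MB/s for each direction.
import os
import time

def benchmark(path, size_mb=256, block_kb=512):
    """Write then read a test file under `path`; return (write_MBps, read_MBps)."""
    test_file = os.path.join(path, "cv_throughput_test.bin")
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb

    start = time.monotonic()
    with open(test_file, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push data to the share, not just the OS cache
    write_mbps = size_mb / (time.monotonic() - start)

    start = time.monotonic()
    with open(test_file, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.monotonic() - start)

    os.remove(test_file)
    return write_mbps, read_mbps
```

Note that the read pass may be served partly from the local page cache, so treat the read figure as an upper bound; the Commvault validation job remains the authoritative test.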

Userlevel 6
Badge +15

Hi @Eyal ,

Were you using deduplication before? In that case, you could have had DASH copies to your DR site, which greatly accelerate the copies after the initial one, as only new unique blocks are sent over the network when they are not already present at the destination.
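To illustrate why DASH copies are so much faster after the first pass: the destination tracks which unique blocks it already holds, so only unseen blocks traverse the network. This toy sketch is illustrative only; the names and block size are assumptions, not Commvault internals.

```python
# Toy illustration of the DASH-copy idea: the destination keeps a store of
# block hashes, and only blocks it has never seen are sent over the wire.
# All names here are illustrative, not Commvault internals.
import hashlib

def dash_copy(data, dest_store, block_size=128 * 1024):
    """'Copy' `data` into `dest_store` (a dict of hash -> block).
    Returns the number of bytes actually transferred."""
    sent = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in dest_store:  # only new unique blocks move
            dest_store[digest] = block
            sent += len(block)
    return sent
```

The first copy sends everything; a later copy of mostly unchanged data sends almost nothing, which is why losing DASH copies can make replication feel many times slower.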

 

Badge +2

Hi Matt,

Please find attached a screenshot of the results from the newly configured library (CIFS share).

Badge +2

Hi Laurent,

We are using dedupe now.

We cannot use DASH copy because we don't have the same capacity at the DR site, and we also need a different retention period at each site, so we are using aux copies.

Userlevel 6
Badge +15

Thanks for the feedback, @Eyal .

Check @Damian Andre's answer on this topic for the auxcopy settings.

 

Maybe you could also check whether the copy to your DR site is set to network-optimized copy instead of disk-optimized copy, in the Deduplication tab of this SPC?

Or maybe, if you're using deduplication on a new SP and a new DL, it can take a bit of time for the initial copy of data to complete.

 

Userlevel 6
Badge +15

…and I saw your other post where you mentioned using 'device replication' for your copy instead of Commvault's controlled copy.

Honestly, I wouldn't encourage you to follow that path; one reason is that your data integrity would then rely on an extra layer, and it could be a real mess if you ever NEED to use that 'replicated' data.

Also, with device-controlled replication instead of Commvault's, you may not have a granular way to prioritize jobs, control bandwidth, or view replicated backup jobs. Very few appliances consume less bandwidth than Commvault does using fully optimized DASH copies.

As advised by @Onno van den Berg, engage Support to understand the lag caused by this new device/configuration.
