Solved

Auxiliary copy jobs slow performance



Hi All,

After installing a new library (NAS with a CIFS share) with a new storage policy (no compression, no encryption):

Backup jobs are running fine, but aux copy jobs (replication to the DR site) are much slower than before, about 10x slower.

We checked the network performance between the MA and the repository, and between the production MA and the DR MA – all working well.
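For anyone wanting to reproduce that check, a raw TCP throughput test between the two MAs can be sketched in a few lines of Python (a minimal sketch, not Commvault tooling; the port and payload size below are arbitrary placeholders):

```python
# Rough TCP throughput check between two hosts (a minimal sketch, not
# Commvault tooling). Run "server" mode on one MA, "client <host>" on the
# other. The port and payload size are arbitrary placeholders.
import socket
import sys
import time

PORT = 50007          # arbitrary free port (assumption)
CHUNK = 1024 * 1024   # 1 MiB send buffer
TOTAL = 512 * CHUNK   # 512 MiB test payload

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.monotonic()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.monotonic() - start
        print(f"received {received / 2**20:.0f} MiB in {secs:.1f}s "
              f"= {received / 2**20 / secs:.1f} MiB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        for _ in range(TOTAL // CHUNK):
            conn.sendall(payload)
    secs = time.monotonic() - start   # approximate: kernel buffers may lag
    print(f"sent {TOTAL / 2**20:.0f} MiB in {secs:.1f}s "
          f"= {TOTAL / 2**20 / secs:.1f} MiB/s")

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    elif sys.argv[1:2] == ["client"]:
        client(sys.argv[2])
    else:
        print("usage: nettest.py server | client <host>")
```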

What can be the problem? Is there any fine-tuning to make the aux copy work faster?

Thanks.


6 replies


Hi @Eyal 

On the new library, right-click one of the mount paths and select Validate Mount Path → add the media agent associated with the library to the dropdown and select the parameters shown below.

 

This will launch a validation job that will benchmark the read/write performance to the new library.

 

The results will pop up in the Console. Once it completes, send a screenshot of the results. I suspect you may be seeing slow read performance from the new library, but this test will help pinpoint the issue.
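If you want an independent cross-check outside Commvault, something like the sketch below can time a sequential write and read against the share (a rough sketch only; the UNC path, file size, and block size are placeholders, and the read pass may be flattered by the OS cache):

```python
# Independent sequential write/read benchmark against the CIFS mount path
# (a rough cross-check of the Validate Mount Path results; the UNC path
# below is a hypothetical placeholder -- point it at the new library's share).
import os
import time

MOUNT_PATH = r"\\nas01\backup_share"   # hypothetical CIFS share
TEST_FILE = os.path.join(MOUNT_PATH, "cv_bench.tmp")
CHUNK = 4 * 1024 * 1024                # 4 MiB blocks
TOTAL = 256 * CHUNK                    # 1 GiB test file

def bench_write():
    buf = os.urandom(CHUNK)            # incompressible data
    start = time.monotonic()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())           # force data onto the share
    return TOTAL / 2**20 / (time.monotonic() - start)

def bench_read():
    # Note: this pass may be served from the OS page cache right after the
    # write; use a larger file or re-run after a reboot for honest numbers.
    start = time.monotonic()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return TOTAL / 2**20 / (time.monotonic() - start)

if __name__ == "__main__":
    try:
        print(f"write: {bench_write():.1f} MiB/s")
        print(f"read:  {bench_read():.1f} MiB/s")
    finally:
        if os.path.exists(TEST_FILE):
            os.remove(TEST_FILE)
```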


  • Byte
  • 386 replies
  • May 30, 2022

Hi @Eyal ,

Were you using deduplication before? In that case, you could have had DASH copies to your DR site, which would greatly accelerate the copies after the initial one, as only new unique blocks are sent over the network when they are not already present at the destination.
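Conceptually, a DASH copy behaves like the toy sketch below (a simplified model, not Commvault's actual implementation): each block's signature is looked up at the destination, and only blocks that are missing get transferred, which is why every copy after the baseline is so much lighter:

```python
# Toy illustration of why DASH copies are fast after the baseline: only
# blocks whose signatures are missing at the destination cross the network.
# A conceptual sketch, not Commvault's actual implementation.
import hashlib

BLOCK = 128 * 1024  # Commvault's default dedup block size is 128 KB

def signatures(data: bytes):
    """Split data into fixed-size blocks and hash each one."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [(hashlib.sha256(b).hexdigest(), b) for b in blocks]

def dash_copy(source: bytes, dest_store: dict) -> int:
    """'Replicate' source to dest_store; return bytes sent over the wire."""
    sent = 0
    for sig, block in signatures(source):
        if sig not in dest_store:     # destination DDB lookup
            dest_store[sig] = block   # only unique blocks are transferred
            sent += len(block)
    return sent

dest = {}
day1 = bytes(1000) * 512                 # initial full backup
day2 = day1 + b"new unique data" * 100   # mostly unchanged next day

print("baseline copy sent:", dash_copy(day1, dest), "bytes")
print("next copy sent:    ", dash_copy(day2, dest), "bytes")
```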

 


  • Author
  • 5 replies
  • May 30, 2022

Hi Matt,

Please find attached a screenshot of the results from the newly configured library (CIFS share).


  • Author
  • 5 replies
  • May 30, 2022

Hi Laurent,

We are using dedupe now.

We cannot use DASH copy because we don't have the same capacity on the DR site, and we also need different retention periods on the two sites, so we are using aux copies.


  • Byte
  • 386 replies
  • Answer
  • May 30, 2022

Thanks for the feedback, @Eyal .

Check @Damian Andre's answer on this topic for the aux copy settings.

 

Maybe you could also check that the copy to your DR site is set to network-optimized copy instead of disk-optimized copy in the Deduplication tab of this storage policy copy (SPC)?

Or, if you're using deduplication on a new storage policy (SP) and a new disk library (DL), it can simply take a while for the initial baseline copy of the data to complete.

 


  • Byte
  • 386 replies
  • May 30, 2022

…and I saw your other post where you mentioned using 'device replication' for your copy instead of a Commvault-controlled copy.

Honestly, I wouldn't encourage you to follow that path; one reason is that your data integrity would then rely on an extra layer, and it could be a real mess if you ever NEED to use that 'replicated' data.

Also, with device-controlled replication instead of Commvault's, you might not have a granular way to prioritize, control bandwidth, or view replicated backup jobs. Very few appliances consume less bandwidth than Commvault does with fully optimized DASH copies.

As advised by @Onno van den Berg, engage Support to understand the lag caused by this new device/configuration.



