Solved

Auxcopy optimization

Hello community,

We are trying to migrate from SAN storage to an S3 cloud library.

Per suggestions, we followed these steps:
 
1. Configured a new global dedupe storage policy using the new S3 bucket and MediaAgent (MA)
2. Configured new secondary copies in the existing storage policies, pointing to the new S3 dedupe storage
3. Ran the aux copy

We have a huge amount of data, so we contacted Commvault support to estimate when the aux copy would complete. The aux copy has currently been running for more than 4 months.

Support mentioned the points below.
- Your current configuration allows newer backups to be selected and prioritized over older data.
- You are also configured to copy all data to the cloud, and support said we are not using dedupe for the aux copy.

How can we make sure our aux copy configuration is optimal?


Please share your inputs.

Thanks in advance.

Spartan9


7 replies

Damian Andre
Vaulter

-You are also configured to copy all data to cloud and mentioned we are not using dedupe for aux copy. 

 

Hey @Spartan9 - are you sure deduplication is not in use? You’d have to have a huge internet pipe to transfer non-deduplicated data to S3. I’d argue that you’d never get this completed in most cases if deduplication is not enabled. Can you share a screenshot of the job progress window (blur out anything confidential).
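Damian’s point about the pipe can be made concrete with a back-of-the-envelope calculation. The 500 TB data set, 1 Gbps link, and 90% dedup savings below are hypothetical figures for illustration only, not numbers from this thread:

```python
# Rough illustration: days needed to push a backup set to S3 over a
# given link, with and without deduplication savings (DASH copy only
# sends blocks the target does not already have).

def transfer_days(data_tb: float, link_gbps: float, dedup_savings: float = 0.0) -> float:
    """Days to move data_tb terabytes over a link_gbps pipe, after
    subtracting the fraction of data saved by deduplication."""
    effective_tb = data_tb * (1.0 - dedup_savings)
    seconds = (effective_tb * 1e12 * 8) / (link_gbps * 1e9)  # TB -> bits / bps
    return seconds / 86400

baseline = transfer_days(500, 1.0)        # fully rehydrated: ~46 days
dash = transfer_days(500, 1.0, 0.90)      # assuming 90% dedup savings: ~4.6 days
print(f"{baseline:.1f} days without dedup, {dash:.1f} days with dedup")
```

Even this ignores protocol overhead and retries, so the real gap is usually wider, which is why a non-deduplicated copy of a large SAN estate to S3 rarely finishes.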

Have you tried increasing the number of streams for the aux copy? Another option is to toggle between the network-optimized and disk-optimized modes in the deduplication settings on the copy. Simply kill and restart the job for the new settings to take effect.
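The effect of adding streams can be sketched with a toy simulation. This is generic Python, not Commvault code; the chunk count, sleep duration, and stream counts are made-up illustration values:

```python
# Illustrative only: why more aux-copy streams shorten wall-clock time.
# Each "chunk" stands in for an independent unit of copy work; the
# number of streams maps to the worker pool size.
from concurrent.futures import ThreadPoolExecutor
import time

def copy_chunk(chunk_id: int) -> int:
    time.sleep(0.05)  # stand-in for one chunk's network/disk transfer
    return chunk_id

def run(chunks: int, streams: int) -> float:
    """Wall-clock seconds to copy all chunks with the given stream count."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=streams) as pool:
        list(pool.map(copy_chunk, range(chunks)))
    return time.perf_counter() - start

t1 = run(20, 1)   # one stream: chunks run back to back
t8 = run(20, 8)   # eight streams: chunks overlap, far less wall-clock time
print(f"1 stream: {t1:.2f}s  8 streams: {t8:.2f}s")
```

In practice the gain flattens once streams saturate the link, the source disks, or the MA, so more streams only help while one of those still has headroom.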


Spartan9
  • Author
  • Byte
  • 6 replies
  • August 12, 2021

Hello Damian,

Thank you for the reply.

I’m sure we are using dedupe, and I’m not sure why support said otherwise.

What is the process to increase the streams for the aux copy?


Thanks,

Spartan9


MNRunner
  • 25 replies
  • Answer
  • August 12, 2021

Hi @Spartan9 ,

“Space Optimized Aux Copy” is most likely the issue here. With this setting enabled (which is the case here), jobs are processed in a specific order, and this causes DASH copies to run slowly. I have seen this issue many times, especially during lifecycle migrations like this.

 

I would kill the current job, disable this setting, and start a new DASH copy. This will consume a little more disk space in your target library, but please enable the setting again after both copies are in sync.
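The trade-off MNRunner describes can be sketched abstractly. This is a hypothetical illustration, not Commvault’s actual scheduler: the job IDs and block groups below are invented, and the point is only that a space-conserving order serializes work that the default order would spread across streams:

```python
# Hypothetical sketch: "space optimized" processes jobs in a strict
# order so deduplicated blocks on the target stay compact, while the
# default order fans jobs out across streams at the cost of some
# extra space on the target library.

jobs = [  # (job_id, shared_block_group) - illustrative data only
    (101, "A"), (102, "B"), (103, "A"), (104, "C"), (105, "B"),
]

# Space optimized: group jobs that share dedup blocks and run one
# group at a time -> less space written, but little parallelism.
space_optimized = sorted(jobs, key=lambda j: j[1])

# Default: take jobs in submission order and fan them out across
# streams, accepting that some shared blocks are written twice.
default_order = list(jobs)

print("space optimized:", [j for j, _ in space_optimized])
print("default order:  ", [j for j, _ in default_order])
```

This is why disabling the setting speeds the copy up but temporarily costs extra space, and why re-enabling it once the copies are in sync is the sensible end state.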

 

Please keep us posted on the result.


Spartan9
  • Author
  • Byte
  • 6 replies
  • August 13, 2021

Hello @MNRunner,

Thank you.

Sure, we will keep you posted.

 


Mike Struening
Vaulter

Hey @Spartan9 , hope all is well!

How did @MNRunner’s suggestion work out?


Spartan9
  • Author
  • Byte
  • 6 replies
  • September 9, 2021

Hi @Mike Struening, yes, it is somewhat better, and we were able to complete one of the aux copy jobs.


Mike Struening
Vaulter

Glad to see our members have helped!!!  Also glad to have you join us :sunglasses:

