Solved

Auxcopy optimization

  • 11 August 2021
  • 7 replies
  • 1196 views


Hello community,

We are trying to migrate our SAN storage to an S3 cloud library.

Per suggestions, we followed these steps (a scripted sketch follows the list):

1. Configured a new global dedupe storage policy using the new S3 bucket and MediaAgent (MA)
2. Configured new secondary copies in the existing storage policies, pointing to the new S3 dedupe storage
3. Ran the aux copy
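For reference, steps 2 and 3 can be scripted with Commvault's cvpysdk Python SDK. This is a minimal sketch that assumes the global dedupe policy, cloud library, and MediaAgent already exist (e.g. created in the GUI); all hostnames, credentials, and policy/library names are placeholders, and the method signatures should be verified against your cvpysdk version.

```python
# Minimal sketch using Commvault's cvpysdk (pip install cvpysdk).
# All names below (host, credentials, policy, library, MA) are placeholders.
from cvpysdk.commcell import Commcell

commcell = Commcell('webserver.example.com', 'admin', 'password')

# Step 2: point a new secondary copy on an existing storage policy
# at the S3 deduplicated storage (cloud library and MA assumed to exist).
policy = commcell.storage_policies.get('Existing_SP')
policy.create_secondary_copy(
    copy_name='S3_Dedupe_Copy',
    library_name='S3_Cloud_Library',
    media_agent_name='MA01',
)

# Step 3: start the aux copy for that copy and report the job id.
job = policy.run_aux_copy(storage_policy_copy_name='S3_Dedupe_Copy',
                          media_agent='MA01')
print('Aux copy job id:', job.job_id)
```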

We have a huge amount of data, so we contacted Commvault Support to find out when the aux copy would complete. At this point the aux copy has been running for more than 4 months.

Support mentioned the following points:

- Your current configuration allows new backups to be selected and prioritized over older data.
- You are also configured to copy all data to the cloud, and you are not using dedupe for the aux copy.

How can we make sure we have an optimal aux copy configuration?

Please share your inputs.

Thanks in advance

Spartan9


Best answer by MNRunner 12 August 2021, 22:08


7 replies


@Spartan9 wrote: "You are also configured to copy all data to the cloud, and you are not using dedupe for the aux copy."

 

Hey @Spartan9 - are you sure deduplication is not in use? You'd have to have a huge internet pipe to transfer non-deduplicated data to S3; I'd argue that in most cases you'd never get this completed if deduplication is not enabled. Can you share a screenshot of the job progress window (blur out anything confidential)?
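To put a rough number on that, here is a back-of-envelope calculation; the 500 TB data size and 1 Gbps link are made-up figures, not your environment:

```python
# Back-of-envelope: duration of a non-deduplicated copy to S3.
# The 500 TB source size and 1 Gbps WAN link are illustrative assumptions.
source_tb = 500                    # total data to copy, in TB
link_gbps = 1                      # usable WAN bandwidth, in Gbit/s

bytes_total = source_tb * 10**12   # TB -> bytes (decimal units)
bytes_per_sec = link_gbps * 10**9 / 8
days = bytes_total / bytes_per_sec / 86400
print(f'{days:.0f} days at {link_gbps} Gbps for {source_tb} TB')  # ~46 days

# With DASH (dedupe-aware) copies, only unique blocks cross the wire,
# so a 10:1 dedupe ratio would cut the transfer to roughly a tenth.
print(f'~{days / 10:.0f} days at a 10:1 dedupe ratio')
```

And that ~46 days assumes the link runs flat out 24x7 with no other traffic, which is rarely realistic.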

Have you tried increasing the number of streams for the aux copy? Another option is to toggle between the network-optimized and disk-optimized modes in the deduplication settings on the copy. Simply kill and restart the job for the new settings to take effect.
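If you want to script the restart with a higher stream count, here is a hedged cvpysdk sketch. Whether run_aux_copy accepts a streams argument depends on your cvpysdk version, so treat that call as an assumption to verify; the network/disk optimized toggle itself is a GUI setting on the copy and is not scripted here.

```python
# Hypothetical restart of the aux copy with a higher stream count (cvpysdk).
# All names are placeholders; verify run_aux_copy's signature in your SDK version.
from cvpysdk.commcell import Commcell

commcell = Commcell('webserver.example.com', 'admin', 'password')
policy = commcell.storage_policies.get('Existing_SP')

# More streams means more parallel readers/writers for the copy;
# streams=0 is commonly interpreted as "use the configured maximum".
job = policy.run_aux_copy(storage_policy_copy_name='S3_Dedupe_Copy',
                          media_agent='MA01',
                          streams=50)
print('Restarted aux copy, job id:', job.job_id)
```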


Hello Damian,

Thank you for the reply.

I'm sure we are using dedupe; I'm not sure why support said otherwise.

What is the process to increase the streams for the aux copy?

Thanks,

Spartan9

Hi @Spartan9,

"Space Optimized Aux Copy" is most likely the issue here. With this setting enabled (which is the case here), jobs are processed in a specific order, and this causes DASH copies to run slowly. I have seen this issue many times, especially during lifecycle migrations like this.

 

I would kill the current job, disable this setting, and start a new DASH copy. This will consume a little more disk space in your target library, but please re-enable the setting after both copies are in sync.
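If it helps, the kill-and-restart part can be scripted with cvpysdk. The sketch below uses a placeholder job id and placeholder names, and the "Space Optimized Auxiliary Copy" checkbox itself is changed in the copy properties in the CommCell Console, not in the script; verify the kill/run_aux_copy signatures against your cvpysdk version.

```python
# Hypothetical kill-and-restart of a long-running aux copy via cvpysdk.
# The job id (12345) and all names are placeholders for illustration.
from cvpysdk.commcell import Commcell

commcell = Commcell('webserver.example.com', 'admin', 'password')

# Kill the aux copy job that has been running for months.
stuck_job = commcell.job_controller.get(12345)
stuck_job.kill(wait_for_job_to_kill=True)

# ...disable "Space Optimized Auxiliary Copy" on the copy (GUI step)...

# Start a fresh DASH copy; re-enable the setting once both copies are in sync.
policy = commcell.storage_policies.get('Existing_SP')
job = policy.run_aux_copy(storage_policy_copy_name='S3_Dedupe_Copy',
                          media_agent='MA01')
print('New DASH copy job id:', job.job_id)
```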

 

Please keep us posted on the result.


Hello @MNRunner,

Thank you.

Sure, I will keep you posted.

 


Hey @Spartan9, hope all is well!

How did @MNRunner's suggestion work out?


Hi @Mike Struening, yes, it is somewhat better, and we were able to complete one of the aux copy jobs.


Glad to see our members have helped!!!  Also glad to have you join us :sunglasses:
