Solved

Recopy in immutable S3 bucket

  • 14 September 2023
  • 2 replies
  • 66 views


Hello,


We have set up a WORM S3 library in Commvault, specifically for second copies.

Commvault advises against performing DDB verification on cloud storage.


I'm curious whether it's possible to start a recopy (in case of bad chunks) with this WORM configuration. If so, I'd like to confirm that once the recopy finishes, the job keeps the same job ID in the immutable S3 storage but references the newly copied objects, and that the references to the previous objects for that job are removed.

Furthermore, since we're dealing with immutable storage, how do we ensure that the recopied objects maintain a link to the same job ID within the S3 immutable bucket? The metadata for that job is also locked, so it can't be modified or deleted either.

thanks


Best answer by Collin Harper 14 September 2023, 18:14




Hello @DanC 

You can run DDB verification against cloud storage; it just cannot be done against Archive-tier cloud storage. We also recommend running it from a VM hosted in the cloud, in the same region as the bucket being read, to avoid egress charges.

Regarding the recopy, the Job ID never changes. In all scenarios, regardless of WORM, and no matter how many copies the job is aux-copied to, the Job ID stays the same. The Aux Copy operation has its own Job ID, of course, but the Job ID of the backups never changes across copies.

When you mark a job for re-copy, we prune the blocks and data block references in the DDB. Since the storage is WORM, this poses a problem: we cannot actually prune the data, because it is held by the vendor's WORM lock. The job is re-copied and the data re-written with different Chunk/SFile IDs. The old data remains in the bucket until everything ages off and the lock expires.
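The behavior above can be sketched with a toy model of an Object Lock bucket. Everything here is hypothetical for illustration (the WormBucket class, the job1234/chunk_N key layout, and the 30-day retention are not Commvault's actual object layout or API); it just shows why the old chunk survives the re-copy while the DDB moves on to the new chunk under the same Job ID.

```python
from datetime import datetime, timedelta

class WormBucket:
    """Toy model of an S3 bucket with Object Lock retention (hypothetical)."""
    def __init__(self):
        self.objects = {}  # key -> retain_until datetime

    def put(self, key, retain_until):
        self.objects[key] = retain_until

    def delete(self, key, now):
        # Like S3 compliance-mode Object Lock: deletes are rejected
        # until the retention period on the object expires.
        if now < self.objects[key]:
            raise PermissionError(f"{key} is WORM-locked until {self.objects[key]}")
        del self.objects[key]

bucket = WormBucket()
now = datetime(2023, 9, 14)

# Job 1234 was originally written as chunk_1; after being marked for
# re-copy, the data is re-written under a new chunk ID (chunk_2).
# Note both keys carry the same Job ID.
bucket.put("job1234/chunk_1", retain_until=now + timedelta(days=30))  # original (bad) chunk
bucket.put("job1234/chunk_2", retain_until=now + timedelta(days=30))  # re-copied chunk

# The DDB now only references chunk_2, but chunk_1 cannot be pruned yet:
try:
    bucket.delete("job1234/chunk_1", now)
except PermissionError as e:
    print(e)  # old data stays until the WORM lock expires

# Once retention has expired, pruning the stale chunk succeeds:
bucket.delete("job1234/chunk_1", now + timedelta(days=31))
```

The key point the sketch illustrates: the re-copy never touches the Job ID, it only changes which object keys the deduplication database points at, and the superseded objects linger until their lock runs out.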

 

Thank you,

Collin


thanks @Collin Harper 

Reply