Question

Possible to move/copy backup jobs from an AWS S3 Glacier Deep Archive bucket to an S3 Standard (hot) bucket?

  • 16 January 2024
  • 6 replies


Afternoon folks,

We have backup jobs sent synchronously to an AWS S3 bucket that uses the Glacier Deep Archive storage class. We need to move 6 months' worth of backup jobs (for a particular storage policy) from this archive bucket to an AWS S3 Standard (hot) bucket so that the data is readily available. Is this possible, and what are my options?

6 replies


Hello @KevinDodd

You have two options for this.

  1. You could recall the jobs using the Cloud Storage Archive Recall workflow. That moves the data from the Archive Tier to the Standard Tier so it can be read. In the workflow you specify how long you want the recalled data to be kept in the Hot Tier: https://documentation.commvault.com/2023e/expert/running_cloud_storage_archive_recall_workflow_on_demand.html
  2. You can set up a new library backed by a Hot Tier bucket, run the recall as above (without the extended retention period), and then aux copy the data from the existing copy to the new copy on the Hot Tier. A rough sketch of the underlying S3 restore call follows this list.
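
If it helps to see what the recall amounts to underneath, the snippet below is the equivalent raw S3 call made with boto3. It is purely illustrative; the workflow drives this for you, and the bucket and key names are placeholders, not real Commvault paths.

    # Illustration only: the Archive Recall workflow issues this kind of request for you.
    # Bucket and key names are placeholders, not real Commvault paths.
    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to stage a temporary, readable copy of an archived object.
    # 'Days' is how long that copy stays readable, the same idea as the
    # retention time you set in the workflow.
    s3.restore_object(
        Bucket="example-deep-archive-bucket",
        Key="example/chunk_0001",
        RestoreRequest={
            "Days": 14,
            "GlacierJobParameters": {"Tier": "Bulk"},  # Bulk is the cheapest (and slowest) retrieval option
        },
    )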

Thank you,
Collin

 


  • Author
  • 19 replies
  • January 16, 2024

Thanks Collin.

Option 2 is definitely what I'm trying to achieve. I've had a look at the Archive Recall workflow and, if I've understood it correctly, you can only enter a single job. For my requirement I need to restore 6 months' worth of jobs, so we're talking thousands of jobs!

Is my only option to restore ALL the data from Deep Glacier? The jobs probably only make up 30% of the whole bucket.


  • Vaulter
  • 34 replies
  • January 17, 2024

Hello @KevinDodd,

Thank you for reaching out to us on this question.  I’m one of the Cloud experts here at CommVault and Collin asked me to review this query of yours.

You can use the Cloud Archive Recall workflow to recall the data tied to the specific job (or jobs) you require, and then AuxCopy only those jobs to the new Hot Tier storage. I will caution you that increasing the recall time on the workflow to allow the AuxCopy to complete will incur further costs from AWS, as you will be charged for the time the recalled data is held temporarily in the Hot Tier. You can check with AWS as to how much the charge is and how it is applied.

To copy just the job (or jobs) you want stored on the Hot Tier library, you can create a selective copy and manually select the jobs to be copied:

https://documentation.commvault.com/2023e/expert/selective_copies.html
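
If you want to confirm that the recalled objects are actually readable before kicking off the aux copy, the restore status is visible on each object. A minimal sketch with boto3 (bucket and key are placeholders):

    # Minimal sketch: check whether the temporary restored copy of an object is ready.
    # Bucket and key are placeholders.
    import boto3

    s3 = boto3.client("s3")

    def restore_complete(bucket: str, key: str) -> bool:
        head = s3.head_object(Bucket=bucket, Key=key)
        # While the restore is still running the Restore header contains
        # ongoing-request="true"; once finished it flips to "false".
        return 'ongoing-request="false"' in head.get("Restore", "")

    print(restore_complete("example-deep-archive-bucket", "example/chunk_0001"))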

Let us know if you have any other questions.

Regards,

Josh P


  • Author
  • 19 replies
  • January 17, 2024

Thanks for the update Josh. I raised a case with Commvault support on this one as well (initially support told me it wasn't possible). We worked out that there are over 15,000 backup jobs that would need to be recalled, and as the workflow can only do one job at a time that isn't feasible. So it looks like my only option is to pull ALL of the data back from archive (this bucket holds jobs for multiple storage policies, but we only want the jobs for a single storage policy). I might need to permanently move it (Permanently Moving Data Between Amazon S3 Storage Tiers (commvault.com)), although I hope that isn't the case, and then set up a new aux copy job to copy the data to a Hot Tier bucket.
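
For reference, this is roughly what a mass recall looks like at the raw S3 level: list everything under a prefix and issue a restore request per archived object. It is a conceptual sketch only (bucket name and prefix are placeholders); in practice Commvault's own tooling needs to drive the recall so it stays consistent with the library.

    # Conceptual sketch only: bulk-restore every Deep Archive object under a prefix.
    # Bucket name and prefix are placeholders.
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-deep-archive-bucket"

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix="example-prefix/"):
        for obj in page.get("Contents", []):
            if obj.get("StorageClass") == "DEEP_ARCHIVE":
                s3.restore_object(
                    Bucket=bucket,
                    Key=obj["Key"],
                    RestoreRequest={"Days": 14, "GlacierJobParameters": {"Tier": "Bulk"}},
                )

At the scale of millions of objects, an S3 Batch Operations restore job is usually a better fit than per-object calls, but the retrieval and request charges mentioned above still apply either way.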


  • Vaulter
  • 34 replies
  • January 17, 2024

Hello @KevinDodd,

Unfortunately, yes, the workflow can only recall a single job at a time. The permanent move would be the next best option; however, please be aware of the change in storage costs that the tier change would incur. Again, check with AWS how pricing will be handled for both the mass recall and the new storage tier.
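
To put very rough numbers around that, a parameterised sketch like the one below can help frame the storage side of the decision. All of the rates are placeholders; take the real per-GB figures for your region from the AWS pricing page.

    # Rough framing of the storage cost question. All rates are placeholders;
    # substitute current AWS pricing for your region before trusting the output.
    data_gb = 10_000                      # example footprint: 10 TB (decimal)

    deep_archive_per_gb_month = 0.00099   # placeholder rate, $/GB-month
    standard_per_gb_month = 0.023         # placeholder rate, $/GB-month
    bulk_retrieval_per_gb = 0.0025        # placeholder rate, one-off recall charge per GB

    print("Deep Archive storage per month:", data_gb * deep_archive_per_gb_month)
    print("Standard storage per month:    ", data_gb * standard_per_gb_month)
    print("One-off bulk recall charge:    ", data_gb * bulk_retrieval_per_gb)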

Regards,

Josh P


  • Author
  • 19 replies
  • January 18, 2024

Hi @Josh Perkoff

We are trying to work out the costs of using CloudTestTool.exe to move data from the S3 Glacier Deep Archive bucket to the S3 hot bucket and then back to the S3 Glacier Deep Archive bucket, specifically the number of PUT/GET requests that will be made.

As an example, let's say we need to recall 10TB worth of data, move it to S3 hot and then move it back to S3 Glacier Deep Archive. How do I work out the GET/PUT requests?

How is the data broken down when using CloudTestTool.exe? Is it right that it is broken down into 512KB chunks, so 10TB / 512KB = 19,531,250 would be the number of GET/PUT requests?

For example:

  • Transfer 10TB to the S3 Standard bucket after recalling: 10TB / 512KB = 19,531,250 PUT requests
  • Transfer 10TB back to S3 Glacier Deep Archive: 10TB / 512KB = 19,531,250 PUT requests
  • Total PUT requests = 39,062,500
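
As a sanity check on that arithmetic (assuming the 512KB chunk size is correct; worth confirming the actual block size with Commvault support), and to turn the counts into a rough request cost, here is a small sketch. The per-request prices are placeholders; take the real figures from the AWS pricing page, noting that PUT pricing differs by destination storage class.

    # Reproduces the arithmetic above. The 512 KB chunk size is the assumption
    # from the question; the per-request prices are placeholders.
    TEN_TB = 10 * 10**12                  # 10 TB in bytes (decimal, as used above)
    CHUNK = 512 * 1000                    # 512 KB in bytes (decimal)

    puts_one_way = TEN_TB // CHUNK        # 19,531,250 per leg
    total_puts = 2 * puts_one_way         # recall leg + move-back leg = 39,062,500

    put_price_standard_per_1000 = 0.005   # placeholder: PUTs into the Standard bucket
    put_price_archive_per_1000 = 0.05     # placeholder: PUTs into the Deep Archive bucket
    cost = (puts_one_way / 1000) * put_price_standard_per_1000 \
         + (puts_one_way / 1000) * put_price_archive_per_1000
    print(f"{total_puts:,} PUT requests, approx ${cost:,.2f}")

Each chunk written also has to be read from the source side, so the GET count per leg should be in the same ballpark as the PUT count.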

 

