Solved

Moving the backups from disk library to cloud library regularly

  • 16 March 2022
  • 1 reply
  • 944 views

Hello Everyone,

We have a huge file server and we are currently backing up to AWS S3, and it is becoming tough for us to complete the backup within the window. We tried using the storage accelerator, but that did not work as expected.

We want to take a local copy first and then move (not copy) it to the cloud, as the data is huge. How can we achieve this on a daily basis - take a backup locally and then move it to the cloud?

Please advise.


1 reply


Hey @sai,

Quite easily, actually, if you are starting from scratch, but it takes a bit of juggling if you already have something in place - especially since I don’t know your environment. I’m not sure whether you are using the CommCell Console or Command Center, so I’ll assume the former.

Ultimately, in your storage policy, you need the primary copy to be your disk-based copy - the primary copy is where the data gets stored in the first instance. Then you need to create a secondary copy with the AWS library specified as the target. To copy the jobs from disk to cloud, you run an auxiliary copy job - that is the type of job that moves data from one copy to another.
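If you would rather script that setup than click through the GUI, Commvault’s cvpysdk Python SDK can do the same thing. This is only a rough sketch: the host, policy, copy, library, and MediaAgent names below are placeholders for your environment, and you should verify the method signatures against the SDK version you have installed.

```python
# Rough sketch using Commvault's cvpysdk (pip install cvpysdk).
# All names (host, policy, copy, library, MediaAgent) are placeholders -
# substitute your own and check method signatures for your SDK version.
from cvpysdk.commcell import Commcell

commcell = Commcell('webconsole.example.com', 'admin', 'password')

# Existing storage policy whose primary copy points at the disk library
sp = commcell.storage_policies.get('FileServer_SP')

# Secondary copy targeting the AWS S3 cloud library
# (this will error if a copy with that name already exists)
sp.create_secondary_copy('AWS_S3_Copy', 'AWS_S3_Library', 'MediaAgent01')

# The auxiliary copy job moves data from the primary (disk) copy to the cloud copy
aux_job = sp.run_aux_copy('AWS_S3_Copy', media_agent='MediaAgent01')
aux_job.wait_for_completion()
```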

On the primary copy, you can either set a very low retention or set it up as a spool copy. A spool copy keeps jobs only until they have been copied somewhere else, after which the space is reclaimed. I’d prefer at least something like 7-day retention so you can take better advantage of deduplication.

There are two ways to achieve what you want in your existing configuration:

  1. Option #1: Add a new disk copy to the existing storage policy, right-click the new copy, and promote it to primary. WARNING: any server pointing to this storage policy will be affected and will back up to disk first.
  2. Option #2: Create a new storage policy. Set the primary copy to your disk target and create a secondary copy pointing to AWS (a synchronous copy copies all jobs, fulls and incrementals, while a selective copy copies only full jobs that meet specific criteria). The downside of this approach is extra storage usage, since it is a completely new, separate copy from the previous one.
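On the “daily basis” part of your question: once the copies are in place, you can schedule the local backup followed by the auxiliary copy in the GUI, or script the sequence. Here is a rough sketch of that daily run with cvpysdk - again, the client, backupset, subclient, policy, and copy names are placeholders, and it is the spool/short retention on the primary copy that eventually frees the disk space so the net effect is a “move” rather than a “copy”.

```python
# Hypothetical daily sequence: back up to disk first, then aux copy to AWS.
# Client, backupset, subclient, policy, and copy names are placeholders.
from cvpysdk.commcell import Commcell

commcell = Commcell('webconsole.example.com', 'admin', 'password')

# 1. Back up the file server to the primary (disk) copy
subclient = (commcell.clients.get('fileserver01')
             .agents.get('File System')
             .backupsets.get('defaultBackupSet')
             .subclients.get('default'))
backup_job = subclient.backup(backup_level='Incremental')
backup_job.wait_for_completion()

# 2. Auxiliary copy pushes the new job(s) from the disk copy to the AWS S3 copy
sp = commcell.storage_policies.get('FileServer_SP')
aux_job = sp.run_aux_copy('AWS_S3_Copy')
aux_job.wait_for_completion()

# Spool / low retention on the primary copy then prunes the disk jobs,
# which is what turns the "copy" into a "move" over time.
```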

If you are not comfortable with any of this (especially the deduplication settings), it may be best to call support so they can assist wherever you get stuck.

If you start falling behind on your copies, there is a little-known feature that lets you start copying a job to the secondary copy even while it is still running, so you don’t have to wait for the job to complete to disk before data starts replicating to AWS. I can’t remember exactly where the setting lives (maybe in the properties of the secondary copy or the aux copy job), but I can look it up.