We’ve been using disk libraries on our offsite DR MediaAgent for years now. We only keep the last week of changed data on it in case of absolute disaster (we store a year’s worth of changes on our on-prem MediaAgent).
Our office just got a large amount of S3-compatible storage and gave us a path to it. I successfully added a new “S3-compatible” cloud library to my CommServe and I can see it (it appears as a single mount path, versus the 50 mount paths my disk library has, one per disk).
My goal is to change my existing Storage Policy, the one called “SSCC Auxilary Copy”, so that it stops using the disk library and starts using the cloud library for all future aux copies.
I went into the Storage Policy properties and found the “Data Paths” tab, which contains a single item: the correct MediaAgent02, paired with the current disk library (which is also named “MediaAgent02”). I think this is what has to be changed. My new cloud library is named “DoIT-S3-Storage”.
But when I click “Add” in this tab, I get the error shown below. When I click the URL pictured, my browser opens but just lands on the home page of the Commvault Essentials documentation site - https://documentation.commvault.com/2022e/essential/index.html - even though I’m already logged into the documentation site.
Is there documentation already written to do what I’m describing? I’m not sure how to proceed next.
I believe you’re going about this the wrong way. Typically what you would do is add a new Copy (S3) and disable the old Copy (Disk). New jobs will then be copied to S3, and you can let the data on the Disk copy hang around until it expires. You can optionally aux copy the data from the Disk copy to the S3 copy so you can retire the Disk copy immediately.
In your Storage Policy, I assume you have 2 Copies? Primary and Secondary (Disk)?
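The add-a-copy-then-disable flow above can be sketched as a toy model in plain Python. To be clear, this is not the Commvault API or its SDK - every class and method name here (StoragePolicy, Copy, run_aux_copy, etc.) is invented purely to illustrate why new jobs land only on the enabled S3 copy while the disabled Disk copy just ages out:

```python
from dataclasses import dataclass, field

# Illustrative model only -- NOT the Commvault API. Names are stand-ins
# for the concepts you see in the Storage Policy GUI.

@dataclass
class Copy:
    name: str
    library: str
    enabled: bool = True                      # disabled copies get no new jobs
    jobs: list = field(default_factory=list)  # jobs aux-copied onto this copy

@dataclass
class StoragePolicy:
    name: str
    copies: list = field(default_factory=list)

    def run_aux_copy(self, job_id):
        """Send a primary job to every *enabled* secondary copy."""
        for c in self.copies:
            if c.enabled:
                c.jobs.append(job_id)

    def age_out(self, copy_name, keep_last):
        """Expire jobs beyond retention on a copy (even a disabled one)."""
        c = next(c for c in self.copies if c.name == copy_name)
        c.jobs = c.jobs[-keep_last:] if keep_last > 0 else []

# Before the change: one secondary copy on the disk library.
sp = StoragePolicy("SSCC Auxilary Copy")
disk = Copy("Secondary (Disk)", library="MediaAgent02")
sp.copies.append(disk)
sp.run_aux_copy("job-001")   # an old job lands on the Disk copy

# The change: add the S3 copy, disable the Disk copy.
s3 = Copy("Secondary (S3)", library="DoIT-S3-Storage")
sp.copies.append(s3)
disk.enabled = False

sp.run_aux_copy("job-002")   # new jobs now go only to the S3 copy
sp.age_out("Secondary (Disk)", keep_last=0)  # eventually retention empties it
```

After this, the Disk copy holds only its pre-existing jobs until they expire, and everything new flows to S3 - which is exactly why no data path surgery on the old copy is needed.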
Thanks Scott. That did work out. It’s the first time since our initial setup in 2018 that we’ve made storage policy changes like this.
Can I get your thoughts on whether this new S3 aux copy, which we want to cover 100% of our backed-up data going back just a week (as mentioned in the OP), will be able to do this? We generally run 3 incrementals per day, 6 days a week, per subclient/client, and then roll those incrementals into synthetic fulls on Saturday.
However, I am guessing that this means the new S3 aux copy will not have everything. While I’m not looking forward to it, I’m guessing I will need to run a “Full” type backup on every client so that an aux copy of those Full backups gets put on the S3?