We are running on Commvault 11.20.
Currently, backup jobs use an Azure Blob cloud disk library with the default container setting (in Azure, the container's access tier is Cool).
We would like to move this storage to another tenant, with a different storage account using a Cool/Archive container. We are looking for the best approach to migrate the storage, ideally done from Commvault rather than from Azure.
Best answer by Niall
Tested a backup on the primary copy; it was successful. However, the aux copy is failing with:
Error Code: [13:138] Description: Error occurred while processing chunk [xxxxx] in media [xxxxxx], at the time of error in library [xxxx] and mount path [[xxxx.x.xx.com] xx], for storage policy [xxxxxxx] copy [xxxx] MediaAgent [xxxx]: Backup Job [xxxxx]. Undefined software error occurred. Source: xxxxx.xx.xxxxx.xx, Process: CVJobReplicatorODS
In the logs I see:
Failed to read the remote file. This operation is not permitted on an archived blob.
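That log message is Azure refusing a read against a blob that is sitting in the Archive tier: archived blobs cannot be read until they are rehydrated back to Cool or Hot. As a diagnostic sketch (not a Commvault procedure), you can check a blob's tier and trigger rehydration with the Azure CLI; `ACCOUNT`, `CONTAINER`, and `BLOB` below are placeholders for your own resources:

```shell
# Check the blob's current access tier and rehydration status
# (ACCOUNT, CONTAINER, BLOB are placeholders)
az storage blob show \
    --account-name ACCOUNT \
    --container-name CONTAINER \
    --name BLOB \
    --query "[properties.blobTier, properties.rehydrationStatus]"

# Rehydrate an archived blob back to Cool so it becomes readable again.
# Standard-priority rehydration can take several hours; High is faster
# but costs more.
az storage blob set-tier \
    --account-name ACCOUNT \
    --container-name CONTAINER \
    --name BLOB \
    --tier Cool \
    --rehydrate-priority Standard
```

Note that when Commvault manages the library as Cool/Archive it normally handles recalls itself; manual rehydration like this is mainly useful for confirming what state a specific blob is in.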
After you have created your new cloud library with Archive/Cool storage, you will need to create a Storage Pool/DDB to use it. You can then create new storage policy copies and select data to aux copy (or DASH copy) into it.
Configure your new Azure storage account (Tenant B) as Cool storage. In Commvault, configure a new cloud library on the new (Tenant B) storage account and define it as Cool/Archive. That way, although the account is set to Cool in Azure, when Commvault uses this cloud library it will tell Azure to put the data in the Archive tier and the metadata in the Cool tier.
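On the Azure side, creating the Tenant B account with Cool as its default access tier could look like the following sketch; the resource group, account name, region, and container name are all placeholder assumptions to replace with your own:

```shell
# Create a general-purpose v2 storage account whose default access tier
# is Cool (placeholders: MyResourceGroup, mytenantbstorage, eastus)
az storage account create \
    --name mytenantbstorage \
    --resource-group MyResourceGroup \
    --location eastus \
    --sku Standard_LRS \
    --kind StorageV2 \
    --access-tier Cool

# Create the container Commvault will point its cloud library at
az storage container create \
    --account-name mytenantbstorage \
    --name commvault-backups \
    --auth-mode login
```

The Archive placement itself does not need to be set here: per the answer above, Commvault's Cool/Archive library setting is what directs backup data to the Archive tier and metadata to Cool.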
Does that clarify it for you?
We do not want to use the existing storage account in Tenant A. We would like to use a storage account in Tenant B (new).
If I have to leave the existing storage to age out as per retention: on the new storage (Tenant B), do I have to create a new cloud disk library with the storage class set to Cool, Archive, or Cool/Archive? Is one cloud library enough, or should I create one for Cool and one for Archive?
This means that to migrate from the Cool tier to the Archive tier, you will need to create a new cloud library within Commvault (configured to use the Archive tier of storage) with a new (global) deduplication database pointing to it. Once these are created, you can create new copies of your storage policies pointing to the new (archive) cloud library and DASH copy the data over. Once complete, you can delete the old copies.
It is actually quite straightforward, but depending on your retention you may choose to let the old jobs age out naturally rather than copying them over (weighing the cost of rehydration against the savings from using Archive storage).
I would also recommend reviewing the indexing version used by the clients. Anything not using V2 indexing, I would migrate to V2 indexing first, as this ensures restores will use a workflow to recall the data, rather than you having to recall it manually first.