Hello Commvault Community!
I have a question on behalf of one of our clients.
We created a Cloud Library (Azure Archive) and copied around 40 TB of data into the cloud.
It took us over two months to transfer this amount of data. When it completed, we realized there is a problem with the Cloud Recall workflow: when we try to “Browse and Restore” using the copy precedence of the Azure Archive copy, Commvault tries to read the index from the archive cloud storage. It runs an “Index Restore” job, which can’t access the index data because it sits on archive storage, so it triggers the Archive Recall workflow to recall the index so the backup content can be listed. This workflow fails after a few seconds, and the Browse and Restore window shows the error: “The index cannot be accessed. Try again later. If the issue persists, contact support.”
We decided that restoring an index from the archive cloud isn’t a good idea, because even if it worked it would take too much time: a few hours just to list the backup content (index restore), and a few more hours for the actual data recall.
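For context, the failure matches Azure’s tier semantics: blobs on the Archive tier cannot be read directly and must first be rehydrated to Hot or Cool, which can take hours. A minimal sketch of that rule (a simplified illustrative model, not the Commvault or Azure SDK API; the tier names are the real Azure ones, everything else is hypothetical):

```python
# Simplified model of Azure blob access-tier read semantics.
# Hot and Cool blobs can be read immediately; Archive blobs
# must be rehydrated first (a process that can take hours).
READABLE_TIERS = {"Hot", "Cool"}

def can_read_directly(tier: str) -> bool:
    """Return True if a blob on this tier can be read without rehydration."""
    return tier in READABLE_TIERS

# The index restore fails because the index blobs are on Archive:
print(can_read_directly("Cool"))     # True  -> why Combined Storage keeps the index here
print(can_read_directly("Archive"))  # False -> why the Index Restore job fails
```

This is why Combined Storage keeps the index/metadata on Cool: those blobs stay directly readable while the bulk data stays on Archive.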
We decided to use Combined Storage (Archive/Cool) instead, which keeps the index and metadata on the Cool tier (which is directly accessible) and the actual data on the Archive tier.
And here is my question: is it possible to somehow convert this 40 TB of data, which is currently entirely on the Archive tier, to Combined Storage (Archive/Cool)?
If I create a new copy for this Storage Policy and specify the Archive Cloud copy as the source, will it try to recall the entire 40 TB from Azure? Or will Azure let Commvault convert the data in place and decide which blocks are the index/metadata?
I would be grateful for any tips and tricks :)
Thanks in advance and have a nice day!
Regards,
Mateusz