Hello,
Could you please tell me: I have cloud storage in Azure, accessed by a local MediaAgent with a local DDB.
Are there any read optimizations during a direct restore from the cloud? For example, if a 10 TB application consumes only 3 TB of cloud storage thanks to deduplication, how much data will be read during a restore?
Thanks!
Recovery from cloud storage, read optimizations

Best answer by Albert Williams
Hello
First rule of Commvault to remember is that the DDB is not used at all for restore, only backup. All you need to restore data is the Commserv and the library. This means if you have a server in the cloud and you give it access to read it will be able to perform the restore.
When it comes to recovering deduplicated data, look at the application size: that is the worst-case amount of data that will be read. When you see "data written" being much lower than the application size, a good way to think about it is that the rest of the data was already written under a different job. So if 10 TB of application size is in the cloud but only 3 TB was written by this job, the other 7 TB was written by earlier jobs.
This is a simplified view, and other factors such as compression come into play, but when planning a restore you should assume the worst case: if the application size is 10 TB, that is the amount of data that will be read and written during the restore. It has to come from somewhere :D
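To make the arithmetic above concrete, here is a minimal sketch (illustrative only, not a Commvault API) showing why restore planning uses the full application size rather than the deduplicated "data written" figure:

```python
# Illustrative sketch: restore sizing for deduplicated backups.
# All function and variable names here are hypothetical examples.

def restore_read_estimate_tb(app_size_tb: float) -> float:
    """Worst-case data read during a restore equals the application size,
    regardless of how little this particular job wrote to storage."""
    return app_size_tb

# Numbers from the question: a 10 TB application whose backup job
# wrote only 3 TB, because 7 TB deduplicated against earlier jobs.
app_size_tb = 10.0
written_this_job_tb = 3.0
written_by_other_jobs_tb = app_size_tb - written_this_job_tb  # 7.0 TB

# Plan the restore for the full application size, not the 3 TB.
print(restore_read_estimate_tb(app_size_tb))  # 10.0
```

The key point: deduplication reduces what a single job *writes*, but the restore must reassemble every block of the application, so the read volume scales with application size.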
Hope this answers the question and helps you get the restore running faster!
Kind regards
Albert Williams