Solved

Azure blob storage library with immutable policy


Badge +5

Hello,

we have created a test Azure blob storage library which is to be used for a deduplicated secondary copy. There is an immutable policy set on the container. According to the Commvault documentation, we set the container retention to twice the storage policy copy retention, and in the DDB properties we set “create new DDB every [] days” to the value of the storage policy copy retention. During the backup cycles, sealed DDBs remain that don’t reference any jobs (all have expired). At some point they are eventually removed automatically (and then their baseline is removed from the cloud storage). These baselines in the cloud consume a lot of space (and cost). There are 3 to 4 baselines in the cloud during the backup cycles.

Does anybody have experience with cloud library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backup? Which process in Commvault decides when a sealed DDB will be removed?

After the test we would like to give a realistic proposal to our customer, but we cannot predict the costs now.

Thank you

Jiri.


Best answer by Prasad Nara 5 March 2021, 19:20


25 replies

Userlevel 7
Badge +23

Hey @JNIZ , for the sealed DDBs, are you sure there are no jobs referencing the DDB files?  Before we go too deep, I want to be sure (and see how you are checking).

To confirm, are you setting the immutable policy to twice your retention (and the DDB seal frequency equal to the retention)?  That tells Azure not to allow deletion until the last job written before each seal (every X days) is itself X days old, which works out to double X from the time the data was first written.
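As a rough illustration of that arithmetic, here is a minimal sketch with hypothetical dates (assuming a 30-day copy retention, a DDB seal every 30 days, and a 60-day Azure immutability window):

```python
from datetime import date, timedelta

retention_days = 30                  # hypothetical copy retention (X)
seal_every_days = retention_days     # DDB sealed every X days
immutable_days = 2 * retention_days  # Azure container lock (2X)

blob_written  = date(2021, 1, 1)                                # first write into a DDB cycle
ddb_sealed    = blob_written + timedelta(days=seal_every_days)  # last job that can reference it
last_job_aged = ddb_sealed + timedelta(days=retention_days)     # that job falls out of retention
lock_expires  = blob_written + timedelta(days=immutable_days)   # Azure finally allows deletion

print(ddb_sealed, last_job_aged, lock_expires)
# 2021-01-31 2021-03-02 2021-03-02 -- the lock on the oldest blob expires just as the
# newest job that could still reference it ages out, so 2X is the minimum safe window.
```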

If nothing is pruning off the cloud, my initial thought is that there are still jobs holding onto those sealed stores.

What are the create dates of each store, and what is the oldest job in any copy using these stores?

For anyone looking for more details, here’s our documentation.

Badge +5

Hi Mike,

thanks for your quick answer. I will prepare the information and post it.

 

Thanks

Jiri.

Userlevel 4
Badge +6

I hope you are using this workflow. We compute the immutable days based on the storage policy copy retention. In the latest version of the workflow, we use 180 days as the default DDB seal frequency; if half of the copy retention is higher than 180 days, then that value is used as the seal frequency.

Immutable days = Copy Retention + DDB Seal Frequency

Examples:

Case 1:

  • Copy Retention: 60 days
  • DDB Seal Frequency: 180 days
  • Immutable days = 60 + 180 = 240 days

Case 2:

  • Copy Retention: 2 years
  • DDB Seal Frequency: 1 year
  • Immutable days = 2 years + 1 year = 3 years

The default seal frequency of 180 days is an ideal configuration. If needed, you can change this value in the workflow configuration parameters if you have a specific need to seal the DDB more aggressively.
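For reference, here is a quick sanity check of that rule as a minimal Python sketch (not the workflow itself, just the arithmetic described above):

```python
def immutability_settings(copy_retention_days: int, default_seal_days: int = 180):
    """Seal frequency is 180 days unless half of the copy retention is higher;
    immutable days = copy retention + seal frequency."""
    seal_frequency = max(default_seal_days, copy_retention_days // 2)
    return seal_frequency, copy_retention_days + seal_frequency

print(immutability_settings(60))       # (180, 240)  -> Case 1: 60 + 180 = 240 days
print(immutability_settings(2 * 365))  # (365, 1095) -> Case 2: 2 years + 1 year = 3 years
```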

 

Non-deduplicated storage is the best fit for immutability, but it needs a lot of storage depending on the copy retention. Commvault provides deduplication with immutable storage to reduce the storage usage to about 2 to 3x.
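To get a rough feel for that trade-off, here is a sketch with purely hypothetical numbers (10 TB of front-end data, weekly fulls, roughly 9 fulls kept under a 60-day retention):

```python
front_end_tb = 10
fulls_retained = 9                               # non-dedupe: every retained full is stored in full
non_dedupe_tb = front_end_tb * fulls_retained

dedupe_tb = [b * front_end_tb for b in (2, 3)]   # dedupe + immutability: roughly 2-3x the baseline
print(f"non-dedupe: ~{non_dedupe_tb} TB | dedupe with immutability: ~{dedupe_tb[0]}-{dedupe_tb[1]} TB")
```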

 

Please try following the above procedure. If you still see more than 3 baselines, then please contact customer support and we will help you check it further.

 

 

Badge +5

Hello,

Thank you very much. I didn’t run the workflow (I couldn’t find it in the Commvault Store), but I set the parameters manually (in Commvault and in the cloud) according to the workflow documentation: SP retention, micropruning disabled, seal DDB every [retention] days, and container lock set to twice [retention] (in the Azure portal).

Mike, the non-expiring DDB was my fault; there was an Index Server backup job. Prasad, you mentioned different retention, seal and lock values, but the latest workflow documentation (V11.22) still shows the values Mike mentioned above. I will test that in the future.

My actual problem is this: the customer wants to place about 150 TB of secondary copy data in the cloud (to replace tape libraries). In my very short test cycles (we don’t have enough time to test for longer) I see 2 to 3 baselines in the cloud, and 2 versus 3 baselines makes a big difference in future space consumption (at 150 TB). We have to tell the customer the exact behavior after the container lock expires. After the 3rd DDB seal, the container lock on the 1st cycle expires and the blobs are released. Will the data of the 1st cycle be released immediately after the 3rd DDB seal, or after some delay? We have to guarantee that there will not be more than 2 baselines (plus dedupe data) in the cloud.
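To put rough numbers on that difference (simple arithmetic only; the per-TB price below is a made-up placeholder, and dedupe savings within a baseline, storage tiers and egress are ignored):

```python
baseline_tb = 150           # approximate size of one full baseline (customer estimate)
price_per_tb_month = 20.0   # hypothetical blob price per TB per month, adjust to your tier/region

for baselines in (2, 3):
    stored_tb = baselines * baseline_tb
    print(f"{baselines} baselines ~= {stored_tb} TB ~= {stored_tb * price_per_tb_month:,.0f} per month")
# 2 baselines ~= 300 TB vs 3 baselines ~= 450 TB -- a 150 TB (50%) difference in stored data.
```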

 

Thank you

Jiri.

Badge +5

One more note: we access the storage account via access key. Is that OK?

Thanks

Jiri.

Userlevel 7
Badge +23


Hey Jiri, you should only have 2 DDBs at a given time: the sealed one and the active one… ASSUMING all jobs run and age as expected.

Is that one job holding up the seal now aged off, or is it still active?
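A minimal way to reason about that check (a hypothetical sketch, not a Commvault API call — the job dates would come from the copy’s job history):

```python
from datetime import date, timedelta

copy_retention = timedelta(days=30)                           # hypothetical copy retention
sealed_ddb_job_dates = [date(2021, 1, 5), date(2021, 1, 28)]  # jobs written to the sealed store
today = date(2021, 3, 5)

newest_job = max(sealed_ddb_job_dates)
if today >= newest_job + copy_retention:
    print("no unexpired jobs reference the sealed DDB -> it (and its baseline) can prune")
else:
    print(f"sealed DDB is still held until {newest_job + copy_retention}")
```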

Badge +5

Hi Mike,

I had to delete the job. There is sometimes a problem with the expiration of Index Server backups. I read the documentation: there should be at least 3 backups in the copy before the oldest one expires. I had three in the primary copy, but 2 of them had a size of 0. These zero-size backups were not copied to the secondary copy (cloud), so there was only one there and it didn’t expire. After deleting the job, the sealed DDB and its baseline were automatically removed.

For now it’s very important for us to know that there should be only 2 DDBs/baselines in the storage.

Thank you both for the helpful information during my first cloud experience. I hope the next tests will help me become more familiar with this technology.

Last question - do you have access to the WORM workflow in the Store? I could not find it.

Thanks a lot

Jiri.


Userlevel 4
Badge +6

You can find it here.

https://cloud.commvault.com/webconsole/softwarestore/store.do#!/136/0/17417

Badge +5

Thank you Prasad,

I have downloaded it. It’s not visible for me in the Store. Maybe it requires another type of login.

 

Jiri.

Userlevel 7
Badge +23


@JNIZ what do you see on that page?  I can see that it is Free, so it should show up for you.  If you only just logged in, click the link a second time and see if the Cloud redirects you, or search for it with the details in my screenshot:

 

Badge +5

Hi Mike,

I logged in to the Store and searched for the pattern “Enable” in category “all”:

I don’t see the workflow.

Thanks

Jiri.

Userlevel 7
Badge +23

I’ll talk to the Cloud folks and see why.

Be in touch!

Userlevel 7
Badge +23

@JNIZ , I already heard back from our Cloud team and they said your account is already configured properly.

Can you try this link?

https://cloud.commvault.com/webconsole/downloadcenter/packageDetails.do?packageId=17417&status=0&type=details

Badge +5

Yes, I see the link properly and I can download the workflow. The link from Prasad above works as well. But if I open the Store, it cannot be found.

Jiri.

Userlevel 7
Badge +23

@JNIZ can you clarify a bit?  I’m not seeing it if I search on the Store (same search string as you’re trying), but the direct link works.

Do you have the workflow now, but are wondering why the search won’t find it?  I’m going to bring up the search issue with my contacts internally.

I want to be sure you are otherwise good.

Userlevel 7
Badge +23

@JNIZ , we’re going to look at the search issue, though I wanted to share that searching for “Worm” at the top level does find the Workflow:

 

Userlevel 4
Badge +10

I see the workflow has no Category assigned, which may be why the search did not work properly. If you search from the “Home” section it is found under “Workflows”, but after clicking “view all” in Workflows the list is empty. So it’s something with the search algorithm.

 

Userlevel 7
Badge +23

@Gseibak you are 100% right!  Dev just fixed it so it works fine now :-)

@JNIZ , let me know if you’re good on this issue; feel free to mark a reply as the Best Answer!

Badge +5

Great! It’s working now.

Thanks Jiri.

Userlevel 7
Badge +19

@Prasad Nara I would really like to see this implemented built into plan definitions. You know which cloud storage the customer is picking, so you know which ones support WORM. If the selected storage supports it, then the customer sees an “enable WORM” option in the plan definition screen. Upon ticking and saving the configuration, the DDB seal configuration is applied in the background automatically.

I would also like to see the same thing when configuring long-term retention. If the customer selects a retention period longer than X days, it should suggest whether Commvault should use cheaper storage tiers, and even suggest disabling/not using deduplication in the case of deep archive.

 

Userlevel 4
Badge +6


It’s on our roadmap.

Userlevel 5
Badge +8

@Onno van den Berg feel free to reach out to me if you’d like to discuss this roadmap further - you know where to find me :)

Userlevel 7
Badge +23

@JNIZ , I split this issue into another thread for you:

 

Userlevel 4
Badge +13


@Prasad Nara 

If all the steps are outlined in the documentation for the workflow, I guess it’s just as good to do them manually. Or is there some hidden magic in the workflow?

//Henke

Userlevel 4
Badge +13

Also, I think the documentation isn’t stating the same thing as you state above, @Prasad Nara.

//Henke
