Hi all,

One of our customers (running CV11.24.43) is interested in configuring a Selective Copy to Amazon S3 Standard-IA/Deep Archive (Combined Storage Tier). So far, no problem.

We intend to copy Monthly Fulls. Basic retention will be 365 days and Yearly Fulls will get extended retention set to 10 years.

Also, they want to lock the objects to prevent them from being deleted before the retention is met. The “Enable WORM Storage” workflow should take care of that.
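
For reference, S3 Object Lock has to be enabled on the bucket itself (typically at creation time) before any WORM workflow can apply retention. A minimal boto3 sketch of what that looks like on the AWS side; the bucket name and region are placeholders, not the customer's actual setup:

```python
import boto3

# Placeholder region for illustration only.
s3 = boto3.client("s3", region_name="eu-west-1")

# Object Lock is typically enabled when the bucket is created;
# a WORM workflow expects a bucket prepared this way.
s3.create_bucket(
    Bucket="example-worm-backup-bucket",  # placeholder name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    ObjectLockEnabledForBucket=True,
)
```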

But it does raise a few questions:

  1. Would you recommend using Deduplication in this scenario, or not?
  2. If we use Dedupe, I suppose a DDB seal will take place automatically every 365 days, right?
  3. In this combined storage tier, metadata is written to Standard-IA, and actual backup data is stored in the Deep Archive tier, right? Do we set the object-level retention on both tiers?
  4. Retention of Index V2 backups does not follow the Storage Policy settings, and might be pruned earlier than the configured retention on the Storage Policy. What object-level retention does the software set on Index V2 backups if they are written to the same Storage Policy as the backup data?
  5. If we were to write Index V2 backups to the Standard-IA tier, should we expect early-deletion fees because we’re only keeping the last (3, I believe) backups of each Index? Or is there any way to alter the Index V2 backup retention?

Thanks in advance for any answers!

In SP26 we will set the retention on each object; prior to that, the software relies on the bucket default retention, which is set to 2x the Storage Policy retention.
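
To illustrate the difference at the S3 level, here is a hedged boto3 sketch of the two models; the bucket name, object key, and COMPLIANCE mode are placeholders and assumptions, not necessarily what the software itself uses:

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

# Pre-SP26 model: a bucket default retention of 2x the 365-day
# Storage Policy retention, inherited by every new object version.
s3.put_object_lock_configuration(
    Bucket="example-worm-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 730}},
    },
)

# SP26+ model: retention stamped on each object as it is written.
s3.put_object_retention(
    Bucket="example-worm-backup-bucket",
    Key="example/chunk/object",  # placeholder key
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=365),
    },
)
```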

If they would like these jobs to age more quickly, they can use the following Additional Setting to increase the frequency of the index checkpoint backups:

Name: CHKPOINT_ENFORCE_DAYS
Category: Indexing
Type: Integer
Value:

Hi @Orazan, thanks for your quick reply. Your answers do address some of my questions.

  1. Thanks, we’ll make a decision based on our own preference then. :-)
  2. Thanks for confirming
  3. I think when we want to enable “WORM Storage” (to prevent accidental/deliberate early deletion of backup data), we should enable “Object Lock” on the bucket and then use the “Enable WORM Storage” workflow, right? So how will that behave for the Standard-IA tier?  Will it also lock the objects for the retention specified in the Storage Policy (365 days)? Or doesn’t it apply Object Lock on the Standard-IA tier (but then, how is our metadata protected against early deletion)? Or is “Configuring WORM Storage Mode on Cloud Storage” (as described here: https://documentation.commvault.com/11.24/expert/9251_configuring_worm_storage_mode_on_cloud_storage.html) not supported for combined tiers?
  4. I was referring to the Object Lock that’s applied to Index V2 Backups after “Configuring WORM Storage Mode on Cloud Storage” (as described here: https://documentation.commvault.com/11.24/expert/9251_configuring_worm_storage_mode_on_cloud_storage.html). I suppose if the Storage Policy retention is set to 365 days, all objects written will be locked for 365 days, and we’ll get a lot of errors when the software tries to delete the oldest Index backup after the fourth new one is written?
  5. Can you provide a link to this additional setting? And would that also solve the issue mentioned in question 4?

Thanks again!


Good morning.  I will attempt to answer your questions in order:

  1. Would you recommend using Deduplication in this scenario, or not?  This is up to you based on the business needs; we really cannot give a recommendation.  You may save some space, but the periodic sealing and rebaselining will make that a moot point.  There is no micro-pruning for Glacier, so sealing will not have the same advantages as with other cloud solutions.
  2. If we use Dedupe, I suppose a DDB seal will take place automatically every 365 days, right? Yes, in the scenario you described.
  3. In this combined storage tier, metadata is written to Standard-IA, and actual backup data is stored in the Deep Archive tier, right? Do we set the object-level retention on both tiers?  You would not want to set Object Lock on the Standard-IA tier, as we need to be able to make changes to the metadata there (see the verification sketch after this list).
  4. Retention of Index V2 backups does not follow the Storage Policy settings, and might be pruned earlier than the configured retention on the Storage Policy. What object-level retention does the software set on Index V2 backups if they are written to the same Storage Policy as the backup data? We keep three copies of each Index backup.  When a fourth is written, the oldest is pruned.
  5. If we were to write Index V2 backups to the Standard-IA tier, should we expect early-deletion fees because we’re only keeping the last (3, I believe) backups of each Index? Or is there any way to alter the Index V2 backup retention?  There is an additional setting that can be used to adjust this to a set block of time.
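
Regarding point 3, one way to sanity-check which objects actually carry a lock (for example, to confirm that the Standard-IA metadata objects are not locked) is to query the retention of a sample object. A small boto3 sketch, with placeholder bucket and key names:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def show_retention(bucket: str, key: str) -> None:
    """Print the Object Lock retention of a single object, if any."""
    try:
        resp = s3.get_object_retention(Bucket=bucket, Key=key)
        print(key, "->", resp["Retention"])  # Mode and RetainUntilDate
    except ClientError as err:
        # Objects without a retention configuration raise an error here.
        print(key, "-> no retention:", err.response["Error"]["Code"])

# Placeholder bucket/keys for illustration only.
show_retention("example-worm-backup-bucket", "example/metadata/object")
show_retention("example-worm-backup-bucket", "example/chunk/object")
```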
