Solved

S3 with WORM and extended retention

  • 7 December 2022
  • 4 replies
  • 532 views

Badge +5
  • Commvault Certified Expert
  • 17 replies

Situation:

Primary backup on Site A onto S3 with deduplication, retention of 30 days / 4 cycles, no WORM

Backup copy (synchronous) to another site onto S3 with deduplication, retention of 30 days / 4 cycles, extended retention for 365 days (monthly fulls) and 10 years (yearly fulls), WORM

 

Question:

If we enable (object-level) WORM with the workflow on the backup copy storage pool, then by default the WORM lock period is twice the retention, meaning in the above example the WORM lock would be 60 days (2x 30 days). However, how does that affect the extended retention for the monthly/yearly fulls? To what value would the WORM lock have to be set to guarantee that the monthly and yearly fulls are WORM protected for 365 days / 10 years, while all the other backups follow the “normal” WORM lock derived from the 30-day retention?

If we set the WORM lock to 10 years (the period the yearly fulls need to be WORM protected), then every backup that gets copied would get that 10-year lock, including all the incrementals that are copied throughout the week and only need 30 days of retention.
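To make the mismatch concrete, here is a small back-of-the-envelope sketch (Python, purely illustrative; the “lock = 2x retention” default is taken from the question above, and the tier names are just labels):

```python
from datetime import timedelta

# Hypothetical illustration of the question: the default object-level WORM
# lock is assumed to be twice the copy retention, so each retention tier
# would seem to call for a very different lock period.
basic_retention = timedelta(days=30)       # 30 days / 4 cycles
monthly_retention = timedelta(days=365)    # extended retention, monthly fulls
yearly_retention = timedelta(days=3650)    # extended retention, yearly fulls (~10 years)

default_lock = 2 * basic_retention         # 60 days -- the assumed default

for name, retention in [("basic", basic_retention),
                        ("monthly full", monthly_retention),
                        ("yearly full", yearly_retention)]:
    covered = "covers" if default_lock >= retention else "does NOT cover"
    print(f"{name:13s} retention={retention.days:5d} days, "
          f"default lock of {default_lock.days} days {covered} it")
```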


Best answer by Mike Struening RETIRED 8 December 2022, 20:47


4 replies

Userlevel 7
Badge +23

@ChrisK, it’s not advised to use extended retention on WORM cloud backups. Essentially, we cannot do any micro pruning with cloud WORM; instead, we need to seal the dedupe store and macro prune at regular intervals as the only method of pruning data (and mixed/extended retention complicates that, to say the least).

With that said, I would set a single overall retention to meet your business requirements.

 

Badge +5

@Mike Struening Thank you for the feedback. I see the issue with using extended retention on WORM; I’m not a fan of it either.

Do you know if Commvault shows the usage on the S3 cloud library based on the amount of data in all active jobs, or based on the actual used space on the S3 bucket itself? From my understanding, once a job has aged / reached its retention, it gets flagged but can’t be deleted/pruned from the bucket yet while the WORM lock is still active for x days. That means there is potentially more data on the bucket than Commvault “knows about”. So from my understanding it would make sense that Commvault only counts the data linked to actual active jobs.
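Independently of what any backup product reports, the physical footprint can be measured directly on the bucket. A minimal sketch using boto3 (the bucket name is a placeholder) that sums all object sizes:

```python
import boto3

# Sum the size of every object in the bucket to get the physical footprint,
# independent of which backup jobs are still "active" in the backup product.
s3 = boto3.client("s3")
bucket = "my-backup-copy-bucket"  # placeholder name

total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]

print(f"Physical bucket usage: {total_bytes / 1024**3:.1f} GiB")
```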

Alternatively - and I’m just thinking out loud here - what would the behaviour be if we enabled WORM on non-deduplicated copies? Would the WORM lock still be twice the retention of the storage policy copy?

From the documentation (Link) it’s not clear how non-deduplicated cloud libraries behave when WORM is enabled on them; it only covers libraries that have deduplication enabled.

Userlevel 7
Badge +23

@ChrisK, Commvault knows how much data is physically present, as we keep track of whether any physical pruning has occurred (WORM or not). We can report on the actual size, not just the size of the active jobs.

The 2x storage policy retention rule doesn’t apply for non-dedupe, as there is no DDB that needs to be sealed. Assuming you are using a cloud library that supports object-based retention lock (as opposed to only bucket/container-level retention lock), we can prune the volumes as soon as that retention time is met. There is very little, if any, extra storage consumption footprint when using storage-level WORM without dedupe.
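For reference, this is roughly what object-level retention lock looks like on the storage side, independent of any backup product: with S3 Object Lock, each object carries its own retain-until date, so per-job data can be released as soon as its own date passes. A minimal boto3 sketch under those assumptions (bucket and key names are hypothetical, and the bucket must have been created with Object Lock enabled):

```python
from datetime import datetime, timedelta, timezone

import boto3

# Illustration only: with object-level retention lock, every object carries
# its own retain-until date. With no dedupe, each job's objects can simply
# be locked for the copy retention and pruned as soon as that date passes.
s3 = boto3.client("s3")
bucket = "my-worm-copy-bucket"  # placeholder; must have Object Lock enabled

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

s3.put_object(
    Bucket=bucket,
    Key="chunks/job-12345/chunk-0001",  # hypothetical key layout
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=retain_until,
)
```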

The 2x storage policy retention logic comes into play due to our requirement to macro prune all data associated with a DDB. In a dedupe scenario, the following needs to take place:

  1. All jobs need to logically age (storage policy retention)

  2. The last job(s) associated with that DDB need to satisfy the object-level lock on storage

 

With non-dedupe, the data associated with each job can be pruned as soon as the object lock retention requirement is met.
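A simplified model of the two pruning behaviours described above (a sketch only, not Commvault’s actual implementation; retention and lock values are taken from this thread’s example):

```python
from datetime import date, timedelta

# Simplified model: with dedupe, nothing in a sealed store can be macro-pruned
# until the LAST job in it has both aged out and cleared its object lock;
# without dedupe, each job's data becomes eligible on its own.
copy_retention = timedelta(days=30)
object_lock = 2 * copy_retention  # assumed default lock on deduplicated WORM copies

def dedupe_store_prunable_on(job_end_dates):
    """Earliest date the whole sealed DDB store can be macro-pruned."""
    last_job = max(job_end_dates)
    return last_job + max(copy_retention, object_lock)

def non_dedupe_prunable_on(job_end_date, lock=copy_retention):
    """Earliest date a single job's data can be pruned without dedupe."""
    return job_end_date + max(copy_retention, lock)

jobs = [date(2022, 12, d) for d in (1, 8, 15, 22)]
print("dedupe store prunable on:", dedupe_store_prunable_on(jobs))
print("per-job (non-dedupe):", [str(non_dedupe_prunable_on(j)) for j in jobs])
```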

Let me know if that helps!

Badge +5

Perfect, thank you @Mike Struening!
