Question

AWS S3 Object Lock

  • 24 April 2024
  • 4 replies
  • 25 views

Userlevel 2
Badge +6

The process of setting retention to half of what is actually required, combined with the periodic sealing of the DDB, seems clunky and results in significantly higher storage usage - can we expect any improvements to that process?

Where an existing block is referenced during an operation, could it be more efficient to just extend the Object Lock retention for the associated S3 Object?


4 replies

Userlevel 2
Badge +4

I completely agree and am facing the same issue now. I have some data I need to retain for 7 years, which means it will actually be sitting in my OL bucket for 14 years.

However, I think this is more a function of S3 OL requiring versioning than an issue Commvault can do much about.

Userlevel 6
Badge +18

You cannot age data which is immutable (locked). Once you unlock any of the data, it's no longer immutable and is susceptible to malware, accidental deletion, etc. With dedupe, unlocking any portion of the data compromises the entire data set (the first/oldest jobs are actually the most critical, since they hold the baseline data). That is why, if you want 7 years of data to be immutable, you need to seal the DDB and have a fresh 7 years under lock before you unlock any part of the first 7 years, and it's also why there is a 2x storage requirement when using third-party storage.
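
A back-of-envelope sketch of that timeline, using the numbers from this thread (an illustration only, not Commvault's exact scheduling):

```python
# Back-of-envelope sketch using the numbers from this thread
# (an illustration, not Commvault's exact scheduling).
required_immutable_years = 7

# Dedupe baselines cannot be partially unlocked, so the first store stays
# locked until a freshly sealed store has a full retention period of its own:
oldest_data_in_bucket_years = required_immutable_years * 2   # ~14 years
peak_storage_multiplier = 2                                   # two full retention copies

print(f"oldest blocks can sit in the bucket ~{oldest_data_in_bucket_years} years")
print(f"worst-case storage is ~{peak_storage_multiplier}x a single retention copy")
```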

Thanks,
Scott

Userlevel 2
Badge +6

Coming back to my question: S3 Object Lock does appear to support extending the retention of objects, so why couldn't baseline objects have their retention extended each time they're referenced, as part of the backup operation?
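
For reference, the S3 API itself does allow the retain-until date on a specific object version to be extended via PutObjectRetention (it can always be lengthened; in compliance mode it can never be shortened or removed). A minimal boto3 sketch of that call, with hypothetical bucket/key/version values and an assumed 7-year target date:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

bucket = "example-ol-bucket"       # hypothetical bucket name
key = "deduped-chunk/0001"         # hypothetical object key
version_id = "EXAMPLEVERSIONID"    # Object Lock retention applies per object version

# Read the current retention so the date is only ever moved forward
# (assumes a retention setting already exists on this version).
current = s3.get_object_retention(Bucket=bucket, Key=key, VersionId=version_id)
current_until = current["Retention"]["RetainUntilDate"]

# Assumed target: keep the object locked for another 7 years from now.
new_until = max(current_until, datetime.now(timezone.utc) + timedelta(days=7 * 365))

s3.put_object_retention(
    Bucket=bucket,
    Key=key,
    VersionId=version_id,
    Retention={"Mode": "COMPLIANCE", "RetainUntilDate": new_until},
)
```

Whether issuing that call for every referenced block on every backup is practical at dedupe-store scale is a separate question, as the reply below touches on.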

Userlevel 6
Badge +18

I would need the smart folks to explain why we may not be using that behavior.  My best guess is it would be a pretty intensive process to touch all the objects every time.
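
For a rough sense of scale (purely hypothetical numbers, nothing measured): a large dedupe store can span tens of millions of S3 objects, and extending retention means one PutObjectRetention request per referenced object version.

```python
# Purely hypothetical numbers to illustrate "touching all the objects every time".
objects_in_store = 50_000_000            # assumed object count for a large dedupe store
price_per_1000_put_requests_usd = 0.005  # assumed S3 Standard PUT-class request pricing

calls_per_backup_cycle = objects_in_store   # one PutObjectRetention per object version
request_cost_usd = calls_per_backup_cycle / 1000 * price_per_1000_put_requests_usd

print(f"{calls_per_backup_cycle:,} retention-extension requests per cycle")
print(f"~${request_cost_usd:,.0f} in request charges per cycle, before considering runtime")
```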

Thanks,
Scott
