I completely agree and am facing the same issue now. I have some data I need to retain for 7 years, which means it will actually be sitting in my OL bucket for 14 years. However, I think this is more a function of S3 Object Lock requiring versioning than an issue Commvault can do much about.
You cannot age data which is immutable (locked). Once you unlock any of the data, it's no longer immutable and is susceptible to malware, accidental deletions, etc. With dedupe, unlocking any portion of the data will compromise the entire data set (the first/oldest jobs are actually the most critical, since they hold the baseline data). That is why, if you want 7 years of data to be immutable, you need to seal the DDB and have a fresh 7 years under lock before you unlock any part of the first 7 years. This is the reason for the 2x storage requirement when using third-party storage.
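To put rough numbers on it (purely illustrative, not Commvault internals -- just the arithmetic behind the 14-year / 2x figure):

```python
# Illustration only: why a 7-year immutability requirement with dedupe
# means the first copy stays locked for ~14 years, i.e. roughly 2x storage.
retention_years = 7

# Copy 1: baseline + 7 years of jobs, all under Object Lock.
copy1_sealed_at = retention_years  # DDB sealed at year 7

# Copy 2 (fresh baseline after the seal) must accumulate its own full
# 7 years under lock before any part of copy 1 can be unlocked.
copy1_earliest_unlock = copy1_sealed_at + retention_years  # year 14

# Between years 7 and 14 both copies sit in the bucket at once,
# which is where the ~2x storage footprint comes from.
print(copy1_earliest_unlock)  # -> 14
```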
Thanks,
Scott
Back to my question: S3 Object Lock seemingly supports extending the retention of objects, so why couldn't baseline objects have their retention extended each time they're referenced, as part of the backup operation?
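Something along these lines (a rough boto3 sketch of what I mean -- the bucket/key/version names and the 7-year figure are placeholders, and I'm not suggesting this is how Commvault does or should implement it):

```python
# Rough sketch only: push out the Object Lock retention on an existing
# object each time a new backup cycle re-references it.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

def extend_retention(bucket: str, key: str, version_id: str, years: int = 7) -> None:
    """Extend the object's retain-until date to 'years' from now.

    S3 Object Lock only allows retention to be lengthened, never shortened,
    so calling this on every reference to a baseline object is safe.
    Assumes the object already has a retention setting applied.
    """
    new_until = datetime.now(timezone.utc) + timedelta(days=365 * years)

    current = s3.get_object_retention(
        Bucket=bucket, Key=key, VersionId=version_id
    )["Retention"]

    # Only call PutObjectRetention when it actually moves the date forward.
    if new_until > current["RetainUntilDate"]:
        s3.put_object_retention(
            Bucket=bucket,
            Key=key,
            VersionId=version_id,
            Retention={"Mode": current["Mode"], "RetainUntilDate": new_until},
        )
```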
I would need the smart folks to explain why we may not be using that behavior. My best guess is that it would be a pretty intensive process to touch all the objects every time.
Thanks,
Scott