Question

11.32, S3 Object Lock and the Architecture Guide.

  • 27 February 2024
  • 7 replies
  • 167 views

Userlevel 2
Badge +4

Hello.

 

I’ve been doing a lot of reading: I’ve seen the threads on here and I’ve read the architecture guide, but I’m still a little puzzled about what I actually need to configure.

I would like to apply WORM in the Commcell and Object Lock on the S3-IA bucket.

I believe objects will be aged from the bucket when the DDB seals, which happens periodically on the same schedule as the policy copy retention.

I still don’t quite understand why the bucket’s Object Lock retention has to be set to twice the policy copy retention.

I also can’t work out if I need to add a lifecycle config rule that sets the non-current version expiration to 2 days. I know this needs to be done for non-object-locked buckets that have versioning enabled, and an object-locked bucket always does.


Does anyone else use WORM and Object Lock? What are your configs? Do you see issues where the size Commvault reports doesn’t match the actual bucket size because objects aren’t expiring?

 

Thanks,


7 replies

Userlevel 5
Badge +12

Hello @Yuggyuy 

Thanks for the great question!
The reason the object retention needs to be twice the length of the copy retention is the following:

Before a DDB is sealed, we assume the entire store is active and being referenced every day (this is not strictly true, but for security we assume the worst-case scenario).

So if you have a 30-day retention and you are sealing the DDB every 30 days, a backup written on day 30, right before the seal, will need to be kept for 30 more days.

This means the first block that was written will be 60 days old when it is ready to be removed, and this is where the doubling comes from. The key reason is that the job protected on day 30 could reference a block written on day 1 if the data is shared (very likely if there is a good dedupe rate). You don't want that block having its WORM lock removed on day 31 when it is referenced by a job that still needs to be held for another 29 days.
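
To put numbers on that, here is a minimal sketch of the worst-case arithmetic in plain Python; the variable names and the 30-day figures are illustrative, not actual Commvault settings:

```python
# Worst-case WORM lock arithmetic for a deduplicated cloud copy.
# Assumes the DDB seal interval equals the copy retention period,
# as described above. All values are illustrative.

copy_retention_days = 30  # storage policy copy retention
seal_interval_days = 30   # DDB sealed once per retention cycle

# A block written on day 1 can still be referenced by a job protected
# on day 30, just before the seal. That job must itself be retained
# for another 30 days, so the day-1 block stays locked until day 60.
min_object_lock_days = seal_interval_days + copy_retention_days

print(min_object_lock_days)  # 60, i.e. twice the copy retention
```

The general rule is seal interval + copy retention; it collapses to "twice the retention" because the two periods are set to match.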

Regarding your other question:

I also can’t work out if I need to add a lifecycle config rule that sets the non-current version expiration to 2 days. I know this needs to be done for non-object-locked buckets that have versioning enabled, and an object-locked bucket always does.

I'm not sure how to answer this, as I think you are talking about lifecycle config inside the cloud vendor rather than a CV configuration. Please confirm and provide further details.

Kind regards

Albert Williams

Userlevel 2
Badge +4

Hi Albert, thanks for the reply; that makes sense for the DDB.

The lifecycle config comes from the CV architecture guide, which describes adding an expiration rule for non-current objects to S3 buckets with versioning enabled. Object-locked buckets always have versioning, so I am trying to work out if we still need the rule for them.

 

The guide only says this:

 

“Commvault does not support Amazon S3 versioning, define an Amazon S3 lifecycle policy to
DeleteObjectVersions if versioning is enabled.”
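
For reference, a lifecycle rule matching that guidance might look like the boto3 sketch below; the bucket name is a placeholder, and the 2-day noncurrent window is the figure from my question, not something the guide prescribes:

```python
# Hedged sketch: expire noncurrent object versions, per the guide's
# note that Commvault does not support S3 versioning.
# Bucket name and day count are illustrative placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-commvault-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # whole bucket
                "NoncurrentVersionExpiration": {"NoncurrentDays": 2},
            }
        ]
    },
)
```

As I understand it, lifecycle expiration can't override Object Lock: S3 won't permanently remove a version until its lock retention has passed, so the rule should be safe to have in place on a locked bucket.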


Userlevel 2
Badge +4

Looking at this thread, it seems to indicate that we shouldn’t use the Amazon lifecycle config, and that Commvault will delete the data once the DDB is sealed.

However, if the Object Lock retention is set to twice that of the storage policy, presumably only a delete marker is created; the objects won’t actually be deleted until the Object Lock retention has passed.

Userlevel 5
Badge +12

Hello @Yuggyuy

My understanding is that Object Lock's goal is to protect against ransomware and accidental deletion. Commvault is not going to submit data for deletion from a DDB until twice your retention has passed, as explained above. To make sure the data is not vulnerable to something outside of CV before we submit the delete, you should keep the lock on it and not expire it early.

Please advise if all your questions have been answered on this thread or if I have missed one.

 

Kind regards

Albert Williams

Userlevel 2
Badge +4

Hello Albert.

Thanks for your help so far.

What I am trying to ascertain is whether, for object-locked buckets with WORM in the Commcell, I need to use Amazon’s lifecycle configuration to delete non-current versions, or if Commvault will delete all the versions of an object when the DDB is sealed.

The documentation isn’t clear on this, and what I see in my current object-locked and WORMed buckets, where the data size in the Commcell is much less than the actual size in the bucket, suggests I do need an AWS lifecycle rule.
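
One way to test that theory is to total the noncurrent versions and delete markers directly; a rough boto3 diagnostic sketch (bucket name is a placeholder):

```python
# Rough diagnostic: how much of the bucket is noncurrent versions
# and delete markers that Commvault doesn't report?
# Bucket name is an illustrative placeholder.
import boto3

s3 = boto3.client("s3")
current_bytes = noncurrent_bytes = delete_markers = 0

paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="example-commvault-bucket"):
    for v in page.get("Versions", []):
        if v["IsLatest"]:
            current_bytes += v["Size"]
        else:
            noncurrent_bytes += v["Size"]
    delete_markers += len(page.get("DeleteMarkers", []))

print(f"current versions:    {current_bytes / 1e9:.1f} GB")
print(f"noncurrent versions: {noncurrent_bytes / 1e9:.1f} GB")
print(f"delete markers:      {delete_markers}")
```

If the noncurrent total roughly accounts for the gap between the bucket size and what the Commcell reports, that would point to the versions not being cleaned up.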

Userlevel 6
Badge +15

Hi Albert. 

Object locking is performed by the cloud provider, and the lifecycle configuration is, from my understanding, there to keep some data in the bucket after it has been requested (_and allowed_) to be deleted (by Commvault, in our case). Just like a recycle bin.

 

So, considering Commvault only manages the current version of each object, it’s not able to view or delete the previous versions of the objects in the bucket. If you activate versioning (but for what purpose?), then those previous versions would be managed by your AWS lifecycle policy.

Regards,

 

Laurent.


Userlevel 2
Badge +4

Just as a note, object versioning is enabled by default in an object-locked bucket and can’t be disabled.

 

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
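
A quick boto3 check should confirm this on a live bucket (bucket name is a placeholder):

```python
# Object Lock implies versioning: both should report "Enabled".
# Bucket name is an illustrative placeholder.
import boto3

s3 = boto3.client("s3")
bucket = "example-commvault-bucket"

lock = s3.get_object_lock_configuration(Bucket=bucket)
versioning = s3.get_bucket_versioning(Bucket=bucket)

print(lock["ObjectLockConfiguration"]["ObjectLockEnabled"])  # "Enabled"
print(versioning.get("Status"))  # "Enabled" -- can't be suspended with Object Lock
```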
