S3 Object Locking and Versioning

  • 3 January 2023
  • 5 replies


There is a related topic regarding my question, but I don’t feel it was adequately answered.


Essentially, there is a conflict in the documentation regarding versioning and how object locking works. First, the public cloud architecture guide for AWS states that bucket and object versioning is not supported by Commvault (p. 85 of the AWS Cloud Architecture Guide, Feature Release 11.25).

However, when enabling object locking in AWS, versioning is enabled by default on the bucket and objects; this is by design. The ‘Enabling S3 Object Lock’ section of the AWS documentation states that versioning is enabled.

So if Commvault doesn’t support versioning (and will result in orphaned objects), how can Commvault support object locking, which also enables versioning?

We have also tested this in the lab and can confirm that when object locking is enabled, we do not see any data prune from the cloud library. We have run the workflow, the stores have sealed, and as far as Commvault is concerned the data is gone. However, the cloud library grows and grows.
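One way to confirm that the growth is coming from orphaned noncurrent versions is to count them directly. This is a sketch only: the bucket name is a placeholder for your cloud library's bucket, and it assumes a configured AWS CLI with list permissions on the bucket.

```shell
# Count noncurrent (superseded) object versions left behind in the bucket.
# "my-commvault-library" is a placeholder bucket name.
aws s3api list-object-versions \
  --bucket my-commvault-library \
  --query 'length(Versions[?IsLatest==`false`])'

# Delete markers also accumulate when deletes are issued against a
# versioned bucket; count them the same way.
aws s3api list-object-versions \
  --bucket my-commvault-library \
  --query 'length(DeleteMarkers)'
```

If the first number keeps climbing after stores seal and jobs age, the pruning requests are only creating delete markers rather than removing versions.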


Best answer by Josh Perkoff 3 January 2023, 17:03




Hello @ScottN 


Thank you for reaching out on this question.


The document referenced is for a version of the product that is End of Life. As of 11.28 and beyond, we support deletion of versioned objects for WORM storage, as outlined in the Data Aging and Pruning section of the current documentation (see the Results header).


For any release prior to 11.28, you would have to set up a lifecycle rule manually to delete the versioned objects.
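For pre-11.28 releases, a lifecycle configuration along these lines would expire noncurrent versions. This is a sketch only: the rule ID and the one-day window are illustrative values to adjust for your environment.

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 1
      }
    }
  ]
}
```

It can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://rule.json`. Note that S3 will not permanently remove a version that is still under Object Lock retention; the expiration takes effect once the retention period has passed.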


Let me know if you have any other questions.






Thanks for the info - we were drawing conclusions around the lifecycle policy in S3, as that would be our only option. It’s disappointing the lifecycle-rule requirement didn’t make its way into the documentation prior to FR28 (I certainly couldn’t find anything), as it’s a significant issue if data cannot prune.

I also can’t find your reference to the fact that versioned objects are now deleted in the link you mentioned. A complete copy and paste is below. Am I missing something?

Finally, will Commvault properly clean up versions if versioning is enabled on the bucket but we don’t also use the WORM workflow / object locking? Just curious as some customers enable versioning by ‘accident’.


  • Data aging and pruning (both the object level, and bucket / container level) will be performed as follows:

    • For deduplicated data, the data will be pruned from the cloud when the DDB is sealed and all the jobs in the DDB are aged. By this time, the WORM retention time in the cloud vendor side will expire, so the deletion will be allowed.

    • For non-deduplicated data, the data will be pruned from the cloud when the job is aged.


Hello @ScottN 


Sorry, the information about versioning support is above that section; what I intended to show was how we go about deleting the versions.


If the data is deduplicated, the versions will not be deleted until the entire DDB engine has been sealed and deleted.


If the data is non-deduplicated, the version is deleted once all reference headers are pruned through normal data aging.


If you would like greater clarity in this documentation regarding versioning, please feel free to submit a Documentation Feedback request referencing this conversation.


As for versioning without WORM: no, by default Commvault will not clean up versions unless WORM pruning or a DDB macro-prune is triggered. If the data is non-deduplicated, a macro-prune cannot be forced, so a lifecycle rule would need to be implemented. If the data is deduplicated, sealing the DDB will force a macro-prune once all jobs age.




Thanks Josh, that’s cleared up the issue for us. Much appreciated.


Hello Scott, Happy to assist!