But I would like to know what considerations need to be made before I execute the above workflow.
Also, I would like to know: once the workflow is executed on an existing storage pool, does that mean the data already available on the storage pool becomes immutable, or only new data written to it?
Any suggestions?
Regards
Ananth
Best answer by Ananth
Hello All,
I had a discussion with a solution architect and got to know the below steps:
It’s better to always enable WORM on a new storage pool rather than on an existing one, the reason being that an existing storage pool will have mixed retentions and more references on the DDB end.
We create a new bucket with Object Lock enabled on it, and we need to make sure the default retention is disabled when we enable Object Lock (see the sketch after these steps).
The workflow will automate the retention on the bucket based on our storage policy copy retention.
We should not mix copies with multiple retentions in the same storage pool, which means we should have a dedicated bucket for each retention value.
We should also note that up to 3x the data will be retained in the cloud when we enable WORM mode.
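For anyone who wants to see what that bucket setup looks like, here is a minimal boto3 sketch, assuming a hypothetical bucket name and us-east-1. The key points are that Object Lock is enabled at creation time (it cannot be turned on later) and that no default retention rule is set, so the workflow can stamp per-object retention from the storage policy copy:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Object Lock can only be enabled when the bucket is created;
# it implicitly turns on versioning as well.
s3.create_bucket(
    Bucket="commvault-worm-30d",  # hypothetical name; one bucket per retention value
    ObjectLockEnabledForBucket=True,
)

# Confirm Object Lock is on, but leave the default retention rule unset
# so the workflow can apply per-object retention instead.
config = s3.get_object_lock_configuration(Bucket="commvault-worm-30d")
assert config["ObjectLockConfiguration"]["ObjectLockEnabled"] == "Enabled"
assert "Rule" not in config["ObjectLockConfiguration"]
```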
I’m not an expert, but if something is happening in the cloud space, it’s not something the MediaAgent will manage, since there are no MA processes running on S3 etc. Someone please correct me if I’m wrong here...
Have you seen this section? There are a few points to read through to determine what is covered and what is not.
@Ananth, enabling WORM on S3 puts it into Compliance mode, which means nothing can be deleted until the retention period is over.
This will prevent any changes to the files, etc., but keep in mind that once you set the option, there’s no going back.
For anyone following along, here’s the section regarding S3:
Before You Begin
Create or configure the following in cloud storage:

Vendor: Amazon S3
What Does the Workflow Do?: The workflow enables Compliance mode on the bucket.
Before You Begin Tasks: Create a bucket in Amazon S3 cloud storage with Object Lock enabled. Verify that the PutBucketObjectLockConfiguration permission is assigned to the bucket, along with the other permissions needed to configure Amazon S3 (a sketch of granting this follows below). For more information about the other permissions, see the Amazon S3 pages.
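If it helps, here is a hedged sketch of granting that permission with boto3. The IAM user, policy name, and bucket ARN are placeholders, and your account may well use roles instead of an inline user policy:

```python
import json
import boto3

iam = boto3.client("iam")

# Allow reading and writing the bucket's Object Lock configuration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutBucketObjectLockConfiguration",
                "s3:GetBucketObjectLockConfiguration",
            ],
            "Resource": "arn:aws:s3:::commvault-worm-30d",  # hypothetical bucket
        }
    ],
}

iam.put_user_policy(
    UserName="commvault-cloud-user",  # hypothetical IAM user
    PolicyName="AllowObjectLockConfig",
    PolicyDocument=json.dumps(policy),
)
```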
For cloud storage you will need to create a new bucket. This is because we need to work with the cloud architecture to make the data immutable.
At the cloud level, the Object Lock flag makes sure that no changes can occur to the files, once written, until the retention is met.
Hand in hand with that, we logically flag the library and the associated storage policy copies for WORM, so that jobs cannot be prematurely aged or deleted by manual intervention.
Also, the retention time can only be increased, never decreased (illustrated in the sketch below).
Finally, micro-pruning is disabled, so if you are using deduplication you will need to periodically seal the DDB, or the data footprint will continue to bloat.
You could always just enable WORM on the individual policy copies, but this is only a logical flag on the Commvault side. It will prevent someone from manually deleting jobs or reducing the copy’s retention.
However, it does little to protect against external manipulation of the files in the cloud.
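Here is a sketch of what Compliance mode actually enforces at the S3 level, using the hypothetical bucket from earlier: deleting a locked version is refused, retention can be extended, but shortening it fails:

```python
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "commvault-worm-30d", "demo/chunk-0001"  # placeholders

# Write an object locked in Compliance mode for 30 days.
resp = s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"backup data",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
version = resp["VersionId"]

# Deleting the locked version is refused until the retain-until date passes.
try:
    s3.delete_object(Bucket=bucket, Key=key, VersionId=version)
except ClientError as err:
    print("delete blocked:", err.response["Error"]["Code"])

# Retention can be extended...
s3.put_object_retention(
    Bucket=bucket, Key=key, VersionId=version,
    Retention={"Mode": "COMPLIANCE",
               "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=60)},
)

# ...but any attempt to shorten it in Compliance mode fails.
try:
    s3.put_object_retention(
        Bucket=bucket, Key=key, VersionId=version,
        Retention={"Mode": "COMPLIANCE",
                   "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=1)},
    )
except ClientError as err:
    print("shorten blocked:", err.response["Error"]["Code"])
```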
Great, thanks for the response. So when we create an S3 bucket I should enable it with Object Lock; do I need to set retention at the bucket level, or will the workflow set the retention on the objects we write to it?
For example, if my retention in the storage pool is 30 days, when I create the S3 bucket with Object Lock do I need to set any retention values there, or do I just enable Object Lock and execute the workflow to take care of the Object Lock retentions?
Yes, I’ll paraphrase, or rephrase, what’s been written here…
S3-level protection: ask your Amazon S3 administrator to enable Object Lock with the proper retention. This is a way to make sure no third party (or ransomware tool) can change (i.e. alter) any data written before the retention is met. This is one part of the WORM setup you might be looking for.
Commvault protection: enable WORM = no one going through the Commvault product or its CLI/APIs would be able to delete, age, or alter your backups before their retention is met. Even if ransomware were able to ‘talk’ to Commvault and ask it to delete every backup, everything, everywhere, with WORM that would not be possible.
Consider this an application-level protection.
And if you have both, then you should be almost safe from crypto/deletion attacks, as of the time I write this. (Yes, innovation moves very fast nowadays…)
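And to answer the bucket-level vs. object-level question empirically: once the workflow has run, you can list a few objects and print the retention it stamped on each one. The bucket name is again a placeholder:

```python
import boto3

s3 = boto3.client("s3")
bucket = "commvault-worm-30d"  # hypothetical bucket

# Print the per-object Object Lock retention applied to recent objects.
for obj in s3.list_objects_v2(Bucket=bucket, MaxKeys=5).get("Contents", []):
    retention = s3.get_object_retention(Bucket=bucket, Key=obj["Key"])["Retention"]
    print(obj["Key"], retention["Mode"], retention["RetainUntilDate"])
```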