I am looking for information on how Commvault cleans up its target S3 bucket. Is there any documentation on how this works?
Thanks
Chuck
Best answer by Damian Andre
If you really want to clear out old data in a bucket, you can set this option on your deduplication database:
Deduplication Database Properties - Settings
Do not Deduplicate against objects older than n day(s)
The number of days after which a unique data block cannot be used for deduplication during new data protection jobs. Setting this value ensures that very old data blocks are not allowed as the 'origin' data for newer data protection jobs that are deduplicated.
Important: If you set a value for less than 30 days, then the window will display the value but internally it will default to 365 days. For example, if you set the value to 29 days, then the window will display 29 days but data blocks that are as old as 365 days will be used for deduplication during new data protection jobs.
I don’t recommend this, but what it will do is stop referencing blocks older than the number of days you specify (30 is the minimum) for new jobs. Over time those new jobs will rewrite those blocks and release the old ones to become prunable. This won’t work if you have long-term retention, because we can’t tell the older jobs to reference a newer block - but as the older jobs meet retention, those older blocks can start being released.
But this is largely cosmetic - the reason those older blocks hang around, as Jordan stated, is that they are still needed by other jobs in retention. There’s no need to create and store them again if we already have them.
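To picture why those blocks stick around, here is a toy reference-counting sketch - this is not Commvault’s actual code, and the names (DedupStore, write_job, expire_job, prune) are made up for illustration. The idea is simply that a unique block can only be pruned from the target storage once no job still in retention references it:

```python
# Toy sketch of reference-counted deduplication pruning.
# NOT Commvault's implementation; class and method names are invented.

from collections import defaultdict


class DedupStore:
    """Each unique block tracks which backup jobs reference it.
    A block becomes prunable only when no retained job references it."""

    def __init__(self):
        self.block_refs = defaultdict(set)  # block hash -> set of job ids

    def write_job(self, job_id, block_hashes):
        # New jobs add references to existing blocks (dedup) or create new ones.
        for h in block_hashes:
            self.block_refs[h].add(job_id)

    def expire_job(self, job_id):
        # When a job ages out of retention, its references are dropped.
        for refs in self.block_refs.values():
            refs.discard(job_id)

    def prune(self):
        # Only blocks with zero remaining references can be deleted
        # from the target storage (e.g. the S3 bucket).
        prunable = [h for h, refs in self.block_refs.items() if not refs]
        for h in prunable:
            del self.block_refs[h]
        return prunable


# Example: job1 and job2 share blockA, so it survives until both expire.
store = DedupStore()
store.write_job("job1", ["blockA", "blockB"])
store.write_job("job2", ["blockA", "blockC"])
store.expire_job("job1")
print(store.prune())   # ['blockB'] - blockA is still referenced by job2
store.expire_job("job2")
print(store.prune())   # ['blockA', 'blockC']
```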