Question

Data Aging in AWS S3

  • 16 February 2023


Greetings,

We have some Aux copies that go to our AWS S3 bucket. The storage policy they fall under has 30-day on-prem retention and 365-day cloud retention. The 30-day on-prem (primary) copy has data aging turned on and appears to be pruning jobs past 30 days. However, when I looked at the properties of the Aux copy, I noticed that the check box for data aging was not selected. When I viewed all jobs for this Aux copy, it showed jobs from years ago, unfortunately. That tells me that nothing is aging out or getting cleaned up.

Our S3 bucket is getting very large, and we need to clean up all of these old jobs to bring it down to a reasonable size. My question is how best to do this cleanup. Can I view the jobs under the Aux copy, select all of the jobs past our retention, and delete them? Would doing that also delete the data out of the S3 bucket?
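
For context, something like the following sketch (boto3, Python; the bucket name is a placeholder, not our real one) can gauge how much of the bucket is past the 365-day retention before any cleanup. Note that with deduplicated storage, object age doesn't map cleanly to job age, so this is for sizing only, not a guide to what is safe to delete:

```python
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "my-commvault-aux-bucket"  # placeholder, not the real bucket name
cutoff = datetime.now(timezone.utc) - timedelta(days=365)

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

old_bytes = old_count = 0
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff:
            old_bytes += obj["Size"]
            old_count += 1

print(f"{old_count} objects, {old_bytes / 1024**4:.2f} TiB older than 365 days")
```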

I have now selected the data aging check box and hit OK, then ran a data aging job from the CommCell root against just our cloud library. That didn't seem to do much. The data aging job ran for about 4 minutes and completed successfully, but it doesn't look like it cleaned out any of the jobs, nor does it show that it aged anything out, from what I can see.

What's the best way to quickly clean up this old Aux/cloud data?

Edit: I should add that this data goes straight to Glacier. The process is that the last full on-prem backup gets sent to the S3 bucket under the Glacier storage class.
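
To double-check that, a head_object call shows the storage class and any restore status of a given file (a sketch; the bucket and key below are made-up placeholders):

```python
import boto3

s3 = boto3.client("s3")
resp = s3.head_object(
    Bucket="my-commvault-aux-bucket",  # placeholder
    Key="Folder/CHUNK_12345",          # placeholder chunk file
)

# StorageClass is omitted in the response for STANDARD objects.
print(resp.get("StorageClass", "STANDARD"))
# "Restore" only appears if a restore was requested or has completed.
print(resp.get("Restore", "no restore requested"))
```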

Thanks


2 replies


Hey @barcode,

It was all looking a bit strange until I saw the mention of Glacier…

The data aging job is a logical one: it marks jobs to be pruned based on retention rules and then initiates the process on the MediaAgent to begin pruning. It does not wait for the physical deletions to complete, so it makes sense that the job only took 4 minutes.
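
Since the physical pruning is asynchronous, one way to confirm deletions are actually landing is to watch the bucket's size over a few days via the S3 CloudWatch storage metric. A sketch under assumptions: the bucket name is a placeholder, and I'm assuming the Glacier storage class reports under the "GlacierStorage" StorageType:

```python
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
resp = cw.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "my-commvault-aux-bucket"},  # placeholder
        {"Name": "StorageType", "Value": "GlacierStorage"},  # assumed value
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
    Period=86400,  # one data point per day
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), f"{point['Average'] / 1024**4:.2f} TiB")
```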

Direct-to-Glacier isn't really recommended anymore; we don't allow creation of new storage libraries using it because the S3 Glacier option is superior, as noted on this page.

In either case, can you check whether you have a Glacier vault lock policy enabled in AWS that might be preventing deletions?
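
If it's easier to script than to click through the console, here's a rough boto3 sketch of both lock checks; the bucket and vault names are placeholders, and the vault check only applies if a native Glacier vault is actually in play:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    cfg = s3.get_object_lock_configuration(Bucket="my-commvault-aux-bucket")
    print("Object Lock:", cfg["ObjectLockConfiguration"])
except ClientError as err:
    # Buckets without Object Lock raise ObjectLockConfigurationNotFoundError.
    print("No Object Lock:", err.response["Error"]["Code"])

glacier = boto3.client("glacier")
try:
    lock = glacier.get_vault_lock(accountId="-", vaultName="my-glacier-vault")
    print("Vault lock:", lock["State"])
except ClientError as err:
    # Vaults with no lock policy raise ResourceNotFoundException.
    print("No vault lock:", err.response["Error"]["Code"])
```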

 

 


Hello Damian,

Thanks for this info. I can confirm that there are no locks on these folders/files: WORM isn't set in Commvault for this, and Object Lock isn't turned on for the bucket at all. I also double-checked the properties of a couple of files in there to make sure they didn't have a retention flag or lock on them.
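
To spot-check more than a couple of files, the same per-object properties can be read with boto3 (a sketch; bucket and key are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-commvault-aux-bucket", "Folder/CHUNK_12345"  # placeholders

try:
    print("Retention:", s3.get_object_retention(Bucket=bucket, Key=key)["Retention"])
except ClientError as err:
    # Errors out if Object Lock was never enabled on the bucket.
    print("Retention:", err.response["Error"]["Code"])

try:
    print("Legal hold:", s3.get_object_legal_hold(Bucket=bucket, Key=key)["LegalHold"])
except ClientError as err:
    print("Legal hold:", err.response["Error"]["Code"])
```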

Anything else that might be locking this or preventing deletion? 
