
We are still new to Commvault and AWS, and more specifically to fitting into a real budget. Recently there was a report of a drastic increase in charges for a small VPC environment. We are on 11.28.36 and are backing up a total of 18 EC2 instances with 1 full per week and 1 incremental per day (except on the full day). The storage policy retention is set to 30 days and 0 cycles. We also note that AWS recently started encrypting S3 buckets for free, and we have AES-256 encryption set.

Also note that the S3 bucket was originally set up with versioning. We have since (12/2022) suspended versioning and plan to move to a new S3 bucket and seal off the DDB.

We have never done a restore in this environment.

The question is how, or if, Commvault uses the GET, PUT, COPY, POST, and LIST operations. The accounting people have reported the following:

******

Feb 2023 – total S3 bill was ~$174

Amazon Simple Storage Service USW2-Requests-Tier1: $17.47
  3,494,075 requests at $0.005 per 1,000 PUT, COPY, POST, or LIST requests

Amazon Simple Storage Service USW2-Requests-Tier2: $67.20
  168,000,129 requests at $0.004 per 10,000 GET and all other requests

****** And two months later:

April 2023 – S3 bill is ~$8,580

Amazon Simple Storage Service USW2-Requests-Tier1: $6,652.05
  1,330,409,529 requests at $0.005 per 1,000 PUT, COPY, POST, or LIST requests

Amazon Simple Storage Service USW2-Requests-Tier2: $1,887.04
  4,717,597,608 requests at $0.004 per 10,000 GET and all other requests

******

As you can see, there is a huge increase in the GET, PUT, COPY, POST, and LIST requests.
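For anyone following along, the per-tier math in the billing lines above checks out; here is a minimal sketch (the rates are the USW2 tier prices quoted in the bill):

```python
# Sanity-check of the S3 request charges quoted above.
# Tier 1 (PUT/COPY/POST/LIST): $0.005 per 1,000 requests
# Tier 2 (GET and all others): $0.004 per 10,000 requests
TIER1_RATE = 0.005 / 1_000    # dollars per request
TIER2_RATE = 0.004 / 10_000   # dollars per request

def request_cost(tier1_requests, tier2_requests):
    """Return (Tier 1 cost, Tier 2 cost) in dollars."""
    return tier1_requests * TIER1_RATE, tier2_requests * TIER2_RATE

feb = request_cost(3_494_075, 168_000_129)        # February figures
apr = request_cost(1_330_409_529, 4_717_597_608)  # April figures

print(f"Feb: ${feb[0]:,.2f} + ${feb[1]:,.2f}")   # matches $17.47 + $67.20
print(f"Apr: ${apr[0]:,.2f} + ${apr[1]:,.2f}")   # matches $6,652.05 + $1,887.04
```

The request counts grew by roughly 380x (Tier 1) and 28x (Tier 2) between the two bills, so whatever changed is issuing orders of magnitude more API calls, not just slightly more.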

I'd appreciate any thoughts on this.

Thanks

Chuck

Damian, we are NOT using synthetic fulls. I see data aging and DDB verification jobs; no data verification. I do think the AWS admin is chasing another issue with AtomiQ that may have caused the larger increase. We should have more information on that tomorrow. The doc you sent me is good.

 

Thanks

Chuck

Ahh ok.

 

It's going to be DDB verification. That is data verification, except instead of running at the job level, it validates that all the blocks tracked by the DDB are in good shape. That means you don't keep reading the same blocks over and over like traditional data verification does (since blocks are shared by many jobs in a deduplication scenario).

 

I would disable DDB verification and you should see those costs reduce. The first DDB verification run is the biggest cost, since it does a 'full' pass over all blocks. Subsequent verifications should only be incremental, scanning new blocks only. In either case, this is the likely cause of the increased spend you saw and a good place to start.
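To get a feel for why that first full pass is expensive, here is a rough back-of-the-envelope sketch. The store size and average deduplicated block size below are hypothetical placeholders, not figures from this thread, and the real request count depends on how Commvault batches its reads:

```python
# Rough estimate of the GET charges for a first ("full") DDB verification
# pass, which reads every unique block in the deduplicated store once.
# ASSUMED values: store size, block size, and one GET per block are all
# illustrative, not taken from this environment.
GET_RATE = 0.004 / 10_000  # dollars per Tier 2 (GET) request

def full_verification_get_cost(store_bytes, avg_block_bytes, reads_per_block=1):
    """Estimate GET charges if every unique block is read reads_per_block times."""
    blocks = store_bytes // avg_block_bytes
    return blocks * reads_per_block * GET_RATE

# Example: a 10 TiB deduplicated store with 128 KiB average blocks (assumed)
cost = full_verification_get_cost(10 * 2**40, 128 * 1024)
print(f"~${cost:,.2f} in GET requests for one full pass")
```

This only covers the GET side; in practice each block read may map to more than one request, and LIST/metadata traffic (Tier 1, at a 12.5x higher per-request rate) adds on top, which is consistent with the billing jump in the original post.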




Are you running synthetic full backups? That would cause a lot of reading of metadata. It's not equivalent to a full restore, but it will generate a lot of those API requests.

Also check whether data verification jobs are running (admin-type jobs).


Thanks, Damian. We are using "Infrequent Access" and it is our primary copy. I will read the article.

Chuck

 


What storage class of S3 are you using? S3 Standard, S3-IA, etc.? It might help paint the picture.

Check out the AWS whitepaper, particularly the section called "Performing a cost optimization review", which explains a bit about those requests:

https://documentation.commvault.com/2022e/expert/assets/pdf/AWSCloudArchitectureGuide_2022e_Edition.pdf

 

Is your cloud copy your primary or secondary copy?

