Solved

Azure cloud lib with significant iterative read operations

  • 5 January 2023
  • 3 replies
  • 152 views


Hello community,

I have a customer who is backing up to a disk library and then aux copying to an Azure cloud library.

However, when the customer looks at their Azure costs for the last 5 days, they are spending over 100 dollars a day on iterative read operations.

I’m trying to figure out what is reading so much from the cloud library. The total written size for a day is 200 GB, so why 41 million read requests? DDB verification is disabled for this library.
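To put the numbers in perspective, here is a quick back-of-the-envelope calculation. The 5-day window is approximate and the implied price per 10,000 operations is derived from the bill itself, not taken from the Azure price list:

```python
# Rough figures based on the numbers above; the window is approximate and the
# implied per-10,000-operation price comes from the bill, not the price list.
reads_total = 41_000_000      # read requests reported for the period
days = 5                      # roughly the window I looked at
cost_per_day = 100.0          # USD per day on iterative read operations

reads_per_day = reads_total / days
implied_price_per_10k = cost_per_day / (reads_per_day / 10_000)

print(f"reads per day          ~ {reads_per_day:,.0f}")          # ~8,200,000
print(f"implied $ per 10k ops  ~ {implied_price_per_10k:.3f}")   # ~0.122
```

That is millions of billed operations every day against a copy that only receives around 200 GB of new data.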

Is there anything else I should look into?

Thanks in advance.

Best answer by Carl Brault 14 February 2023, 16:43

3 replies


Hi @Carl Brault! Long time no speak, hope you are doing well. Good to see you being active on the community!

So a few questions back from my side:

  • Which version are they running and what is the version of the installed maintenance release?
  • Any planned/scheduled tasks that run frequently, like restores?
  • Could it be that there is another copy that uses the Azure cloud library as its source while it could also use the disk library?
  • Did you look into data verification?

Hi @Onno van den Berg,

Long time indeed. Always a pleasure speaking with you, my friend. Thanks for your time. So, back to your questions.

  • Which version are they running and what is the version of the installed maintenance release? I would have to connect to the customer’s environment, but I believe it is 11.28.
  • Any planned/scheduled tasks that run frequently, like restores? I know they use VM Live Sync every 4 hours, but would that be enough to generate 41 million reads in 4 days? I thought it should be much lower, but do you think that could be it?
  • Could it be that there is another copy that uses the Azure cloud library as its source while it could also use the disk library? No, there isn’t.
  • Did you look into data verification? I thought of the default DDB verification schedule. I thought that could generate lots of reads, but the schedule is disabled for that DDB/copy.

So that leaves me with Live Sync for the VMs. But they generate a total of 200 GB of changes over 24 hours across all VMs. So again, do you think that running those Live Sync jobs every 4 hours, with a total of 200 GB, would generate over 10 million reads a day? 🤔
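Just to put a number on it, a quick back-of-the-envelope check on my side:

```python
# If the ~10 million reads per day were actually pulling the ~200 GB of daily
# changes, each request would only fetch about 20 KB on average, which seems
# far too small for aux copy / Live Sync chunk reads.
daily_changes_gb = 200
reads_per_day = 10_000_000

kb_per_read = daily_changes_gb * 1024 * 1024 / reads_per_day
print(f"~{kb_per_read:.0f} KB per read request")  # ~21 KB
```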

And what could I look for to validate that? Is there a log I could look into that would tell me the source of the reads?
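What I have in mind to try in the meantime is splitting the storage account’s Transactions metric by API name to see which calls dominate. A minimal sketch, assuming the azure-identity and azure-monitor-query packages; the resource ID is a placeholder for the customer’s storage account:

```python
# Break the storage account "Transactions" metric down by API name to see which
# calls (GetBlob, ListBlobs, deletes, ...) account for the request volume.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID for the storage account backing the cloud library.
STORAGE_ACCOUNT_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<account>"
)

client = MetricsQueryClient(DefaultAzureCredential())

result = client.query_resource(
    STORAGE_ACCOUNT_ID,
    metric_names=["Transactions"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
    filter="ApiName eq '*'",  # split the metric by the ApiName dimension
)

for metric in result.metrics:
    for series in metric.timeseries:
        total = sum(point.total or 0 for point in series.data)
        # metadata_values holds the dimension values (here: the API name)
        print(series.metadata_values, f"{total:,.0f}")
```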

Thanks in advance for your help. 😟

Have a great day.


Hi again @Onno van den Berg. It took a while to reply, but we found the issue. The problem was that the customer had enabled Hierarchical Namespace on the storage account, which prevents folders from being deleted. So during data aging, Commvault was able to delete the chunk files, but not the folders they were in. It then kept retrying, for thousands and thousands of folders, generating millions of transactions a day.
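For anyone else hitting this, a quick way to confirm whether hierarchical namespace is enabled on the storage account behind the cloud library. A minimal sketch with the azure-mgmt-storage SDK; the subscription, resource group and account names are placeholders:

```python
# Check whether hierarchical namespace (ADLS Gen2) is enabled on a storage account.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
account = client.storage_accounts.get_properties("<resource-group>", "<storage-account>")

print("Hierarchical namespace enabled:", account.is_hns_enabled)
```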

We ended up recreating the storage account and aux copying the whole thing, and now everything is back to normal.

Thanks for your help, my friend.
