I have a customer who is backing up to a disk library and then aux copying to an Azure cloud library.
However, when the customer looks at their Azure costs for the last 5 days, they are spending over 100 dollars a day on Iterative Read Operations.
I'm trying to figure out what is reading so much from the cloud library. The total written size for a day is 200 GB, so why 41 million read requests? DDB verification is disabled for this library.
Is there anything else I should look into?
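For reference, here is a minimal sketch of how one could break the account's Transactions metric down per API name to see which operation dominates (it assumes the azure-monitor-query and azure-identity Python packages; the resource ID below is a placeholder):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder: blob service resource ID of the storage account behind the cloud library.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<account>/blobServices/default"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Ask for the Transactions metric over the last 24 hours, split per API name.
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["Transactions"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
    filter="ApiName eq '*'",  # '*' splits the metric into one series per operation
)

totals = {}
for metric in response.metrics:
    for series in metric.timeseries:
        # Each series carries the ApiName it belongs to in its metadata.
        meta = series.metadata_values or {}
        api = next((v for k, v in meta.items() if k.lower() == "apiname"), "unknown")
        totals[api] = totals.get(api, 0) + sum(point.total or 0 for point in series.data)

for api, count in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{api}: {count:,.0f} transactions in the last 24h")
```

The `ApiName eq '*'` filter makes Azure Monitor return one time series per operation, so the ratio of reads to writes should stand out immediately.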
Thanks in advance.
Best answer by Carl Brault
@Carl Brault! Long time no speak, hope you are doing well and good to see you being active on the community!
So a few questions back from my side:
@Onno van den Berg,
Long time indeed. Always a pleasure speaking with you, my friend. Thanks for your time. So, back to your questions.
So that leaves me with Live Sync for the VMs. But they generate a total of 200 GB of changes over 24 hours for all VMs. So again, do you think that those 4-hour Live Sync runs, with a total of 200 GB, would generate over 10 million reads a day? 🤔
And what could I look for to validate that? Is there a log I could look into that would tell me the source of the reads?
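On the Azure side, assuming the account's blob diagnostic logs are already being sent to a Log Analytics workspace, something like this would show which operations and which callers are generating the requests (a rough sketch with the azure-monitor-query Python package; the workspace ID is a placeholder):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Count requests per operation, caller IP and user agent over the last 24 hours.
QUERY = """
StorageBlobLogs
| where TimeGenerated > ago(1d)
| summarize Requests = count() by OperationName, CallerIpAddress, UserAgentHeader
| order by Requests desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

The caller IP and user agent should at least make it obvious whether the reads are coming from the MediaAgents or from something else entirely.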
Thanks in advance for your help. 😟
Have a great day.
@Onno van den Berg, it took a while to reply, but we found the issue. The problem was that the customer had enabled Hierarchical Namespace on the storage account, which prevented folders from being deleted. During data aging, Commvault was able to delete the chunk files, but not the folders that contained them. It then kept retrying, for thousands and thousands of folders, generating millions of transactions a day.
We ended up recreating the storage account and aux copying the whole thing over, and now everything is back to normal.
Thanks for your help, my friend.
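In case anyone else runs into this: a quick way to check up front whether Hierarchical Namespace is enabled on a storage account (a minimal sketch with the azure-mgmt-storage Python SDK; subscription, resource group and account names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
ACCOUNT_NAME = "<storage-account>"      # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
account = client.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT_NAME)

# Hierarchical Namespace cannot be turned off once enabled, so a True here means
# the account would have to be recreated to get rid of it.
print(f"{ACCOUNT_NAME}: Hierarchical Namespace enabled = {account.is_hns_enabled}")
```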