Solved

Size on disk value related to cloud libraries is incorrect


Userlevel 7
Badge +16

While trying to figure out how to gather BET for charging purposes, I noticed that the size on disk displayed in both Command Center and the CommCell console for cloud libraries is incorrect. I have opened a ticket for it, referring in particular to S3 buckets, but I was wondering if other customers see the same and whether it also occurs on libraries using Microsoft Azure Storage or other types/vendors. 

Please comment in case you identify the same. I noticed it while running FR26 and FR28 (2022e).


Best answer by Collin Harper 2 August 2022, 15:54


11 replies

Userlevel 7
Badge +16

FYI the root cause was found and the issue was solved by enabling:

Process volume size updates for cloud mount paths

The CommCell where we saw the issue with the media size calculation was our test environment, which had been down for a long period of time following an unsuccessful upgrade towards a version that was still under development. We didn't have time to fix it immediately and decided at a certain point to perform a full recovery using an old DR set. Commvault uses the information from its own database to calculate the media size, hence the mount path contained a lot of waste. 

I still do not understand why development took the approach of disabling it for older CommServe installations while still showing incorrect/inaccurate information, instead of resetting the figure to 0 or displaying DISABLED. 
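To illustrate the root cause described above, here is a hypothetical Python sketch (the volume names and sizes are made up): the displayed media size is derived from the volumes the CommServe database knows about, so after a recovery from an old DR set the database view and the actual bucket contents no longer line up.

```python
# Hypothetical illustration: after restoring an old DR set, the CommServe
# database no longer matches the bucket contents, so a DB-derived media
# size diverges from the real one (in either direction: the stale DB may
# still count data that was since pruned, or miss data written later).
db_volumes = {"V1": 120, "V2": 80, "V3": 60}     # GB per volume, per the stale DB
cloud_volumes = {"V1": 120, "V2": 80, "V4": 95}  # GB per volume, actually in the bucket

reported_gb = sum(db_volumes.values())           # figure the console derives from the DB
actual_gb = sum(cloud_volumes.values())          # figure a real size query would return
print(reported_gb, actual_gb, abs(reported_gb - actual_gb))  # 260 295 35
```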

Userlevel 7
Badge +23

Found the case and added it on the moderator side of the thread.

I’ll keep an eye on it.

Userlevel 7
Badge +23

Hey @Onno van den Berg , can you share the case number?  That way I don’t have to bug you for status updates 😁

Userlevel 7
Badge +16

Update: we turned on the setting on the 2 older CommCells and after a few days it displayed the correct figure; both are running FR26 with the latest MR. Our test CCID runs FR28 and has had the setting turned on since the start, but the value I'm seeing there is incorrect and way off target. I suspect the size calculation hits an issue because we have 2 cloud libraries and 1 of the 2 is attached to a MA that has been turned off for weeks. 

I also questioned the current decision to turn it off on CommCells which have been migrated from an older version, and how they implemented this. Instead of resetting the stats to 0, or even better changing the display to disabled, they now show a value which is incorrect, which makes it all very confusing. I also brought up the point of offering a refresh statistics button on the cloud library so you can refresh the value on demand.

To be continued! 

Userlevel 7
Badge +16

I have opened a ticket which has been escalated to engineering already. Will keep you posted!

Userlevel 7
Badge +16

Well in that case I would have expected the value to be reset to zero or, to make it clearer, to be replaced by disabled. Now it is reporting an incorrect value, which doesn't make sense. 

Anyway, I enabled it and still see a huge and thus incorrect size being displayed, so I updated the ticket with all the information! It would be great if others could also look up their presented and actual figures. 

Userlevel 2
Badge +4

Why did they decide to disable it by default??? 

 

I think I am right in saying that back in the early days of cloud, a decision was made to limit anything that might incur a cost. As cloud platforms matured and cost mechanisms were better understood, API calls turned out to be peanuts in the scheme of things, and the change was made to the default setting. It would have made sense for that setting to have been enabled as part of a Service Pack upgrade, but then we all complain to our dev guys when they do those sorts of things as part of SP or FR upgrades. So the best option they have is to change it only for newly deployed platforms.

My 2cents

Niall 

Userlevel 7
Badge +16

Ok, so we have another environment that was deployed very recently running FR28. The option is enabled and I see the deviation there as well. 

Cloud library with 5 buckets reporting a size on disk of 426GB. Looking on the S3 side, the sum of all buckets is 583GB. I will update the ticket with these findings as well. 
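As a quick sanity check on the figures above (426 GB reported vs. 583 GB actual on the S3 side), the under-reporting can be expressed as a percentage; a trivial Python sketch:

```python
# Quantify how far the reported "size on disk" falls below the actual
# S3 total, as a percentage of the actual size.
def under_reporting_pct(reported_gb: float, actual_gb: float) -> float:
    return (actual_gb - reported_gb) / actual_gb * 100

# Figures from this environment: 426 GB reported, 583 GB on the S3 side.
print(f"{under_reporting_pct(426, 583):.1f}% below the actual total")  # 26.9% ...
```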

Userlevel 7
Badge +16

Damn, you're right! I just learned something today… Why did they decide to disable it by default??? It's indeed disabled, so I will turn it on and see what happens. 

Userlevel 4
Badge +10

Hello @Onno van den Berg 

Can you confirm Volume size Updates for Cloud Mount Paths is enabled?

In older installations this was disabled by default because it causes egress charges when we query the volumes for size, but having it disabled will also skew the reporting of how much data is written and how much space is consumed. If it is off, I would recommend turning it on and checking back in 24 hrs to see if there is any difference.

 

Media Management Configuration: Service Configuration - https://documentation.commvault.com/11.26/expert/11022_media_management_configuration_service_configuration.html

Process volume size updates for cloud mount paths

Definition: When this option is enabled, a volume size update request is sent to the cloud mount path. (The size occupied in the cloud mount path is updated in the Mount Path Properties (General) dialog box.)

The frequency of the requests is controlled by Interval (Minutes) between volume size update requests option.

Default Value: 1

Range: 0 (disabled) or 1 (enabled)

Usage: By default, this option is enabled and the size of data in the cloud mount path is updated.

This option is not applicable for cloud storage libraries in Direct Glacier.

This option is enabled by default on the following versions:

  • New CommServe server installations in SP13 (or later)

  • Upgraded CommServe server from Version 10 to Version 11 SP13 (or later)
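To cross-check the console figure against the actual bucket contents independently, the object sizes from a bucket listing can be summed. A minimal sketch, assuming the listing pages look like S3 `ListObjectsV2` responses (in practice they would come from an SDK such as boto3; the page data below is made up for illustration):

```python
# Sum object sizes across paginated listing pages to obtain the true
# bucket size, for comparison with the "size on disk" in the console.
def bucket_size_bytes(pages):
    """pages: iterable of ListObjectsV2-style dicts with a 'Contents' list."""
    return sum(obj["Size"]
               for page in pages
               for obj in page.get("Contents", []))

# Mock pages standing in for real paginated API responses.
pages = [
    {"Contents": [{"Key": "chunk1", "Size": 1_000}, {"Key": "chunk2", "Size": 2_500}]},
    {"Contents": [{"Key": "chunk3", "Size": 500}]},
    {},  # a final page with no Contents is handled gracefully
]
print(bucket_size_bytes(pages))  # 4000
```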

Userlevel 7
Badge +23

@Onno van den Berg , can you share the case number?  I’d like to take a look and track it.

Thanks!
