We use the REST API to get space usage values for our Libraries in Commvault for reporting purposes, specifically Total Capacity, Free Space, and Used Space.

For Disk Libraries this is no problem, but we are currently testing S3 Storage as Libraries, and there we run into an issue: these values can’t be extracted.

In the Command Center we see the “Size on disk”.

With QCommands (qlist media -l) we see the “TOTAL DATA(GB)”.

And with the REST API (GET Library Details) we see “N/A”.

A quota has been set on the S3 Bucket, so my understanding is that this quota should act as the “Capacity” value from the Commvault side.

How can we extract this data from Commvault?
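
For reference, this is roughly the call we make today. A minimal Python sketch only; the host name, library ID, and token value are placeholders, and the /Library/{libraryId} route is simply my reading of the GET Library Details documentation:

```python
# Minimal sketch (Python + requests) of the Library Details call we use today.
# The host name, library ID, and token value are placeholders, not real values.
import requests

BASE = "https://commandcenter.example.com/webconsole/api"  # hypothetical host
HEADERS = {
    "Accept": "application/json",
    "Authtoken": "<token from POST /Login>",               # obtained beforehand
}

library_id = 42                                            # hypothetical library ID
resp = requests.get(f"{BASE}/Library/{library_id}", headers=HEADERS)
resp.raise_for_status()

# For a disk library this returns usable capacity / free / used space figures;
# for our S3 cloud library the same figures come back as "N/A".
print(resp.json())
```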

Thanks for the question (and observation), @ChrisK! Let me talk to our docs team and see if there’s a better method for you regarding Cloud Storage Libraries.

I’m not seeing much on https://api.commvault.com/.


I’d be curious to know this too, following.


I got an answer straight from dev!

You can use the API below for storage space details:

REST API - GET Storage Pool Details (commvault.com)

For S3 cloud storage it is expected that free space and capacity will be N/A; we do not consider the quota as a “Capacity”.
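
If it helps, here is a rough sketch of what that could look like from Python. The host name, token handling, and the response field names called out in the comments are assumptions on my part; only the /StoragePool route comes from the linked docs:

```python
# Minimal sketch (Python + requests) for the Storage Pool Details route.
# Assumes an Authtoken has already been obtained via POST /Login; the host
# name and the response field names marked below are assumptions and may
# differ between Feature Releases.
import requests

BASE = "https://commandcenter.example.com/webconsole/api"  # hypothetical host
HEADERS = {
    "Accept": "application/json",
    "Authtoken": "<token from POST /Login>",
}

# List the storage pools first to find their IDs.
pools = requests.get(f"{BASE}/StoragePool", headers=HEADERS)
pools.raise_for_status()

for pool in pools.json().get("storagePoolList", []):       # field name assumed
    pool_id = pool["storagePoolEntity"]["storagePoolId"]   # field names assumed
    # Pull per-pool details; for cloud storage the size figures are reported
    # here rather than on the library itself.
    details = requests.get(f"{BASE}/StoragePool/{pool_id}", headers=HEADERS)
    details.raise_for_status()
    print(pool_id, details.json())
```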

Let me know if that helps; @Lucy, that goes for you too!


Thank you @Mike Struening for the update.

I feared as much, but the GET Storage Pool REST API command should be a good alternative for us.

Regarding the sizing of the S3 Buckets, what size would you say a Bucket should not exceed? The Best Practices mention that (for Amazon S3 anyway) one Bucket is fine for up to 500 TB, but that seems rather large → Best Practices (commvault.com)

In a previous release (11.10) the recommended size was 25 TB → Cloud Storage - Best Practices - Amazon S3 (commvault.com)

What would be your recommendation? I assume that in terms of read performance (especially for restores) it’s beneficial to have multiple (smaller) Buckets?


@ChrisK, apologies for the delayed response!

I would defer to whatever the latest docs say. Often we are able to get more performance in later Feature Releases, so the limits go up!


Thank you @Mike Struening, will do.