CS Ver FR 24
I have an on-prem S3 solution. I have presented it to Commvault as a library, with multiple buckets as mount points. Within Commvault I have limited the size of these buckets to 100 TB (performance tuning for the library; I decided not to limit the buckets on the storage side). I can see the size on disk of the data in each bucket.
Commvault therefore has enough information to calculate Capacity, Free Space, and Usable Free Space, but when I view my library and mount path stats, I see nothing.
Best answer by Onno van den Berg
Commvault definitely does not see ‘do not consume’ as the same as free space. You could do the same on a disk path on D:\ with 200 GB of free space but limit the usage to 100 GB - the actual capacity is still 200 GB but you’ve capped how much is usable. If you reach the cap, you could increase it by a few GB temporarily, knowing how much you have in reserve from the free space metric.
I see your point, though, and it would be good to set some sort of manual capacity value so it could be monitored in the software, but that option does not exist.
Since this is a cloud library, it is generally treated as infinite storage, so Commvault does not try to calculate capacity. May I ask which library type you selected during setup? If it is one of the known on-prem S3-compatible types, I believe we still do the space calculations; if it is generic, we treat it as actual in-cloud S3.
The device is Cloudian HyperStore (that is the API I am using).
Many cloud providers recommend that you limit the number of objects in your buckets. As that option is not available in Commvault, I have picked a bucket size as my limiter. The challenge is that I only want to add buckets as I approach my maximum limit, so I do not spread data over too many buckets.
In this scenario my physical library capacity is irrelevant; what I am interested in is how close I am to the maximum capacity limit (number of buckets × max bucket size).
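To illustrate the capacity model described above, here is a minimal sketch. All numbers are hypothetical: four buckets, each capped at 100 TB in Commvault, with per-bucket "size on disk" values made up for the example.

```python
# Hypothetical illustration of "effective capacity = number of
# buckets x max bucket size" versus actual physical capacity.
TB = 1024**4
bucket_cap = 100 * TB                                  # Commvault-side cap per bucket
buckets_used = [92 * TB, 88 * TB, 40 * TB, 5 * TB]     # made-up "size on disk" per bucket

effective_capacity = len(buckets_used) * bucket_cap    # the limit that actually matters here
used = sum(buckets_used)
pct_used = 100 * used / effective_capacity

print(f"effective capacity: {effective_capacity / TB:.0f} TB")
print(f"used: {used / TB:.0f} TB ({pct_used:.1f}%)")
```

The physical capacity of the Cloudian cluster never enters the calculation; only the bucket count and the per-bucket cap do.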
What I meant was: when you added the library, did you select Cloudian or S3-compatible as the type in Commvault?
Anyhow, what you can do is monitor the bucket size, and once you reach the soft limit, freeze the bucket for new writes and add another one. Most providers already recommend creating multiple buckets to spread the data across them.
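The freeze-at-soft-limit workflow above can be sketched as a small decision helper. This is illustrative only: the function name, thresholds, and bucket names are all assumptions, and Commvault has no "freeze bucket" API here; in practice you would disable writes on the corresponding mount path and add the new bucket as a new mount path.

```python
# Sketch of the "freeze at soft limit, then add a bucket" logic.
# All names and values are hypothetical examples.
TB = 1024**4

def plan_writes(buckets, soft_limit):
    """Given {bucket_name: bytes_used}, return the buckets still open
    for writes and whether a new bucket should be provisioned."""
    writable = [name for name, used in buckets.items() if used < soft_limit]
    need_new_bucket = not writable
    return writable, need_new_bucket

# Both buckets are over a 95 TB soft limit, so a new one is needed.
buckets = {"bucket-01": 98 * TB, "bucket-02": 97 * TB}
writable, need_new = plan_writes(buckets, soft_limit=95 * TB)
```

Running a check like this on a schedule (fed by the per-bucket usage your storage reports) gives you the early warning that Commvault's library stats are not providing in this setup.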
Please also check my post from a few days ago regarding the "size on disk" value.