I would say it depends ;-) Are you referring to a local solution offering S3-compatible object storage, or to Amazon S3? In case of a local solution, are you running it dispersed across multiple sites? And which solution are we talking about in that case: StorageGRID, Cloudian?
What we have seen ourselves is that configuring the option "Do not deduplicate against objects older than X days" when using S3(-compatible) object storage definitely helps in improving storage efficiency.
To come back to some points:
S3 performance can be massive, but it all depends on the number of streams that you can throw at it (see the small upload sketch after the list below). If you run it locally, then it really depends on the infrastructure, especially the number of nodes, the network setup, and the performance it can deliver. Of course the MAs will also have to be able to push all that bandwidth, but we have seen good performance. We do not use disk libraries at all anymore because:
- Much less susceptible to ransomware
- Some offer S3 Object Lock functionality to increase security and to mitigate malicious intent and ransomware (see the Object Lock sketch after this list)
- No need to do AuxCopies to have the data stored highly available across sites, in case you have a solution running across dispersed sites (I do recommend adding a secondary copy to public cloud or, for example, a tape-out)
- If you do not have S3 storage internally already, then adding it can deliver value to other parts of the business as well
- Easier expansions
- Able to deliver massive throughput, but latency is most of the time much higher compared to a regular disk library that has been sized correctly on both capacity and performance
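To illustrate the point about streams: aggregate S3 throughput mostly comes from parallelism rather than from any single connection. Below is a minimal sketch using the AWS SDK for Python (boto3); the bucket name, file name, and tuning values are just placeholders you would adjust for your own environment.

```python
# Minimal sketch: multipart upload with multiple parallel streams via boto3.
# Bucket/key/file names and the tuning values are placeholders, not recommendations.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# More concurrent streams generally means more aggregate throughput,
# as long as the MA (CPU/NIC) and the object store can keep up.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MiB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MiB parts
    max_concurrency=16,                    # number of parallel part uploads
    use_threads=True,
)

s3.upload_file("backup_chunk.bin", "my-backup-bucket",
               "chunks/backup_chunk.bin", Config=config)
```

The same applies on the read path: restores benefit from many parallel GETs rather than a few sequential reads.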
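And regarding Object Lock: on providers that support it, objects can be written with a retention mode so they cannot be deleted or overwritten until the retention date passes. A rough sketch with placeholder names (the bucket must have been created with Object Lock enabled):

```python
# Sketch: writing an object under S3 Object Lock in COMPLIANCE mode.
# Names and the retention period are placeholders; the bucket must have Object Lock enabled.
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("backup_chunk.bin", "rb") as body:
    s3.put_object(
        Bucket="my-backup-bucket",
        Key="chunks/backup_chunk.bin",
        Body=body,
        ObjectLockMode="COMPLIANCE",             # cannot be shortened or removed, even by root
        ObjectLockRetainUntilDate=retain_until,  # immutable until this timestamp
    )
```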
There was an option (I can't find it anymore) that you could turn on to use a disk library as a cache for the metadata of deduplicated data. That would already help improve performance, as it reduces the vast number of small GETs, but it seems it was removed from the product. I played with it in the past, but we use partitioned DDBs in the cloud, where we do not have shared storage to facilitate the cache across all involved MAs, and you could only define one library. So that was not possible in our case, or we would have had to implement all kinds of workarounds which would take away the benefit of the cache, so for us it was not a valid option. I was more looking for an option that would just use a "folder" on the MA that you could designate as a cache device, and Commvault would then pre-warm the cache by fetching all related metadata to the local cache and keep it up-to-date at all times. That would remove the disk library and take away the need for shared storage to really benefit from it. Maybe it comes back in the future; that would be nice, as I think cloud storage will become the number one used storage at some point in time.
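Just to make the idea concrete (this is not Commvault functionality, purely a hypothetical sketch of what such a pre-warm could look like): sync all metadata objects under a given prefix down to a local cache folder on the MA, so the small reads are served from local disk instead of as GETs against the object store. The bucket name, prefix, and cache path are made up for illustration.

```python
# Hypothetical pre-warm sketch: copy "metadata" objects under a prefix
# to a local cache folder so small reads hit local disk instead of S3.
# Bucket, prefix, and cache path are illustrative only.
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-backup-bucket"
METADATA_PREFIX = "ddb-metadata/"   # assumed prefix holding dedup metadata objects
CACHE_DIR = "/opt/commvault_cache"  # designated local cache folder on the MA

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=METADATA_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        local_path = os.path.join(CACHE_DIR, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        # Skip objects already cached with the same size; re-fetch otherwise.
        if os.path.exists(local_path) and os.path.getsize(local_path) == obj["Size"]:
            continue
        s3.download_file(BUCKET, key, local_path)
```

Keeping such a cache up-to-date would of course be the hard part, which is exactly why it would be nice to have it built into the product.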