@Ingo Maus, I got some information from our internal folks that I want to share (the comments below are compiled from several people). Please let me know if you need further information:
The BET calc on MCSS is the straight BET (regular on-disk) x 15% headroom for object storage x 12% additional headroom for the 128K dedupe block size. This comes to roughly 28-29% additional headroom over the regular BET calc for disk (1.15 x 1.12 ≈ 1.29).
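If it helps, here is that calc as a quick back-of-the-envelope Python snippet. The 15% and 12% factors come straight from the comments above; the function and constant names are just mine for illustration:

```python
# Rough sketch of the MCSS BET headroom calc described above.
# The 15% (object storage) and 12% (128K dedupe block size) factors
# come from the comments; the naming is purely illustrative.

OBJECT_STORAGE_HEADROOM = 1.15   # +15% for object storage
DEDUPE_128K_HEADROOM = 1.12      # +12% for 128K dedupe block size

def mcss_bet(disk_bet_tb: float) -> float:
    """Estimate MCSS BET from the regular on-disk BET (in TB)."""
    return disk_bet_tb * OBJECT_STORAGE_HEADROOM * DEDUPE_128K_HEADROOM

# Example: 100 TB of on-disk BET works out to ~128.8 TB on MCSS,
# i.e. roughly 28-29% additional headroom.
print(f"{mcss_bet(100):.1f} TB")  # 128.8 TB
```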
- With cloud storage, blobs act as small containers, each storing up to 64 unique dedupe blocks, and a blob can only be deleted after all 64 segments in it have expired. That increases consumption because space takes longer to recycle in the store. Fundamentally, we recommend planning 20-30% additional capacity on cloud data stores in your BET projection compared to what you were seeing for the same copies on disk libraries (see the first sketch after this list for an illustration of the blob expiry behavior).
- MCSS, as a subscription storage service, provides a metered quantity for use across the cell. If you consume the capacity at 100%, new copy jobs will be put into a pending state until you delete/recycle space for reuse or buy additional capacity on the license. This means users should also plan additional buffer and growth space into the projection to ensure they don't hit a stoppage in operations. It is also key to monitor usage: use the Trending Chart for Cloud Storage in the Command Center reports, and set alert reminders for the critical consumption levels (see the second sketch below for one way to estimate how long your remaining buffer will last).
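To make the blob recycling point more concrete, here is a minimal sketch, assuming a blob is deletable only once every dedupe block it holds has expired. Everything here is hypothetical illustration, not a Commvault API:

```python
# Minimal illustration of the blob-level reclamation described above:
# a blob holds up to 64 dedupe blocks and can only be deleted once
# ALL of its blocks have expired. One long-lived block keeps the
# whole blob (including its other 63 expired blocks) on the meter.

from dataclasses import dataclass

@dataclass
class Blob:
    # expiry_days[i] is the retention of dedupe block i (up to 64 per blob)
    expiry_days: list[int]

    def deletable_after(self) -> int:
        """The blob recycles only when its last block expires."""
        return max(self.expiry_days)

# 63 blocks expire at day 30, but one block is retained for 365 days,
# so the blob (and all the space it occupies) is held for 365 days.
blob = Blob(expiry_days=[30] * 63 + [365])
print(blob.deletable_after())  # 365
```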
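And for the monitoring point, a rough way to reason about the buffer (purely illustrative math, not a Command Center feature): given current consumption and a daily growth rate, estimate how many days remain before you cross a given alert threshold.

```python
# Back-of-the-envelope runway estimate for a metered MCSS subscription.
# Purely illustrative; actual monitoring should use the Trending Chart
# for Cloud Storage in Command Center and its alert reminders.

def days_until_threshold(licensed_tb: float,
                         used_tb: float,
                         growth_tb_per_day: float,
                         threshold_pct: float = 0.85) -> float:
    """Days until usage crosses threshold_pct of the licensed capacity."""
    headroom_tb = licensed_tb * threshold_pct - used_tb
    if headroom_tb <= 0:
        return 0.0  # already past the alert threshold
    return headroom_tb / growth_tb_per_day

# Example: 100 TB licensed, 70 TB used, growing 0.5 TB/day.
# The 85 TB alert level leaves 15 TB of runway, i.e. ~30 days.
print(days_until_threshold(100, 70, 0.5))  # 30.0
```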
Take a look and let me know if you have any questions; I did my best to compile this into a clear format.