What is confusing about this topic is that Commvault seems to be sending mixed messages about extended retention.
From v10, the recommendation was to use separate selective storage policy copies (and therefore separate DDBs) for jobs with longer retention, to avoid problems with DDB size and performance. This is where the warning message in the Commvault Admin Console originated. However, this approach leads to a massive increase in the amount of back-end storage consumed, because each storage policy copy you create has its own deduplication baseline.
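To make the baseline-duplication point concrete, here is a back-of-envelope sketch. All figures (baseline size, change rates, retention periods, long-term uniqueness) are made-up assumptions for illustration, not Commvault-measured values; the point is only that a second deduplicated copy costs roughly one extra full baseline of back-end storage.

```python
# Back-of-envelope sketch (all figures are assumptions, not measured values)
# comparing back-end storage for:
#   (a) one deduplicated copy using extended retention rules, vs.
#   (b) a primary copy plus a separate selective copy with its own
#       DDB, which must re-store its own full baseline.

baseline_tb = 100.0            # assumed deduplicated full-backup baseline
daily_unique_tb = 1.0          # assumed unique (post-dedup) change per day
short_retention_days = 30      # standard retention on the primary copy
monthly_fulls_retained = 12    # month-end jobs kept for a year
long_term_unique_tb = 3.0      # assumed unique data each monthly full adds

# (a) Single copy: one baseline shared by short- and long-term jobs.
single_copy_tb = (baseline_tb
                  + daily_unique_tb * short_retention_days
                  + long_term_unique_tb * monthly_fulls_retained)

# (b) Two copies: the selective copy duplicates the baseline in its own DDB.
two_copy_tb = ((baseline_tb + daily_unique_tb * short_retention_days)
               + (baseline_tb + long_term_unique_tb * monthly_fulls_retained))

print(f"single copy with extended retention: {single_copy_tb:.0f} TB")
print(f"separate selective copy:             {two_copy_tb:.0f} TB")
# The difference is one duplicated baseline (100 TB in this sketch).
```

Under these assumed numbers the two-copy design consumes 266 TB versus 166 TB for a single copy; whatever the real figures, the gap is dominated by the duplicated baseline.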
However, recent documentation describing how to configure Plans from the Commvault Command Center specifically states that one should use extended retention rules to create a hierarchical retention scheme (e.g. to retain month-end backup jobs for longer): see https://documentation.commvault.com/11.23/essential/131386_extended_retention_rules.html
Seeing as this contradicts the message from the Admin Console, does this mean that Commvault has changed its tune regarding extended retention rules, and that the Admin Console message should now be ignored as obsolete?
Or are there specific cases where one should use a separate copy for long-term data? The obvious case is when one wants to send the long-retention data to an archive-tier storage platform (e.g. tape or cold-tier public cloud storage). But what about when the extended retention exceeds a certain number of days, or when the amount of deduplicated back-end storage exceeds a certain size (either of which could cause the DDB partitions to become excessively large)? If so, Commvault should be providing guidelines for their users.
Best answer by Mike Struening
Sharing a just-published article that answers these concerns:
Feel free to comment and discuss via the link above!