Hello Commvault Community,
Today I come with a question about the Commvault deduplication mechanism.
We noticed two deduplication database (DDB) engines with nearly identical statistics that differ in one parameter: unique blocks.
The difference between these engines is close to 1 billion unique blocks, while the other values are almost identical. Where could this difference come from? Is there any explainable reason for such a gap, considering the rest of the parameters?
DASH Copy is enabled between the two deduplication database engines that are managed by different Media Agents.
Below are examples from the other two DDB engines where the situation looks correct; the DASH Copy mechanism is also enabled there.
I would appreciate help in understanding what may cause such differences in the number of unique blocks between DDB engines.
Another issue: is there any way to reduce the disk space used by this deduplication database? Currently, only 17% of free space is left. DDB Compacting and Garbage Collection are enabled, and it was suggested to add partitions or extra storage space. Is there some way to reclaim space, or is adding storage the only option? Sealing the DDB is not an option due to its size.
Thank you for your help.
Best answer by Kamil