Writing new blocks after large volumes were cut over to a new server

Hi Commvault people,
I am looking for some advice on a recent volume migration and Commvault's behaviour afterwards.
We basically upgraded a large file server: 60 TB of data across several volumes.
The data on “Server 1” is already under dedupe.
When we migrated the volumes to "Server 2", we obviously created new subclients and had to kick off new fulls, but I was surprised to see these new backups writing large amounts of data.
Previously, we were writing minimal amounts of data, literally a few GB, despite the volumes being around 15 TB (very small rate of change).
Given that dedupe is in play and the blocks of data are already held in the store, why are we writing so many new blocks? To put this in context, we are writing maybe 30% of the application size (a few TB), compared to the previous few GB.
Hopefully that makes sense. Basically, I am trying to work out why Commvault is not using the pre-existing blocks for these new backups.
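
For what it's worth, here is my rough mental model of block-level dedupe as a toy Python sketch. This is definitely not Commvault's actual signature code, and the 128 KiB block size is just my assumption; the point is that even a tiny shift in how the data is laid out on the new server would change every fixed-size block signature, which is the kind of effect I'm trying to rule out:

```python
import hashlib
import os

BLOCK_SIZE = 128 * 1024  # assuming a 128 KiB dedupe block size

def signatures(data: bytes) -> list[str]:
    """Hash each fixed-size block, the way I understand block-level dedupe to work."""
    return [
        hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

original = os.urandom(1024 * 1024)  # stand-in for 1 MiB of file data on Server 1

# Same bytes at the same block boundaries: every signature matches, so the
# store would just reference existing blocks and write almost nothing new.
assert signatures(original) == signatures(original)

# Shift the stream by a single byte (think: different allocation or layout on
# Server 2) and every fixed-size block now hashes to a brand-new signature.
shifted = b"\x00" + original
matched = set(signatures(original)) & set(signatures(shifted))
print(f"blocks matched after a 1-byte shift: {len(matched)}")  # prints 0
```

If that mental model is wrong, or if there is something on the new client I should be checking, please set me straight.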
Thanks