Question

Writing new blocks after large volumes were cutover to a new server



Hi Commvault people,

I am looking for some advice on a recent volume migration and Commvault’s behaviour afterwards.

So we basically upgraded a large file server: 60 TB of data across several volumes.
The data on “Server 1” is already under dedupe.

When we migrated the volumes to “Server 2”, we obviously had to create new subclients and kick off new fulls, but I was surprised to see these new backups writing large amounts of data.

Previously, we were writing minimal amounts of data, literally a few GB, despite the volumes being around 15 TB (a very small rate of change).

Given that dedupe is in play, and we already have the blocks of data held in the store, why are we writing so many new blocks? To put this in context, we are writing maybe 30% of the application size, which is a few TB, compared to the previous few GB.

Hopefully that makes sense. Basically, I am trying to work out why Commvault is not using pre-existing blocks for these new backups.
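
To illustrate what I mean, here is a minimal sketch of how I understand signature-based dedup to work (the fixed 128 KiB block size and SHA-256 are my assumptions for illustration, not confirmed Commvault internals): if anything changes the byte stream or its alignment between the two servers, the block signatures stop matching even though the underlying data is identical.

```python
import hashlib

BLOCK = 128 * 1024  # assumed fixed dedup block size, for illustration only

def block_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size blocks and return one signature per block."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

original = bytes(range(256)) * (4 * BLOCK // 256)  # 4 blocks of sample data

# Same bytes, same alignment: every signature matches, so nothing is rewritten.
assert block_hashes(original) == block_hashes(original)

# Same bytes shifted by one (e.g. a different on-disk layout after migration):
# no block boundary lines up, so every signature looks brand new to the DDB.
shifted = b"\x00" + original
matches = set(block_hashes(original)) & set(block_hashes(shifted))
print(f"matching blocks after a 1-byte shift: {len(matches)}")  # prints 0
```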

Thanks

6 replies

  • Vaulter
  • 125 replies
  • March 27, 2024

Hi @MountainGoat,

Can you please confirm whether the same deduplication engine is being used for Server 2 as well?

Regards,
Suleman



Same SP, same DDB, same everything.



Could it be somehow related to the previous backups being synthetic fulls? You cannot run a synthetic full for the first-ever backup of a subclient; it has to be a traditional full…


  • Vaulter
  • 125 replies
  • March 28, 2024

Hi @MountainGoat,

That is a possibility, but if the data is the same, there should be more savings. Is there a possibility that the data being backed up is different/unique?

Regards,
Suleman


Scott Moseman
Vaulter

Are both the old and new subclients using the same “block level” settings?

Thanks,
Scott
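
To illustrate why that question matters, here is a quick sketch, assuming fixed-size chunking (the 128 KiB and 512 KiB figures are example values I chose, not the actual subclient settings): identical data chunked at two different block sizes produces completely disjoint signature sets, so nothing dedupes.

```python
import hashlib

def signatures(data: bytes, block: int) -> set[str]:
    """Return the set of block signatures for data at a given block size."""
    return {hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)}

data = b"unchanged file contents " * 100_000  # ~2.4 MB of identical data

small = signatures(data, 128 * 1024)   # old subclient's block size (example)
large = signatures(data, 512 * 1024)   # new subclient's block size (example)

# Identical data, but the two signature sets share nothing, so the DDB
# would treat every block from the new subclient as unique and write it.
print(f"shared signatures: {len(small & large)}")  # prints 0
```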
 


Forum|alt.badge.img+8

@MountainGoat, even though we are using the same SP, it is advisable to confirm that it is the same DDB engine and that there is no horizontal scaling of DDBs (where new engines will be spawned).

In the usual scenario, since all the blocks from the volume on “Server 1” are already available, the new data should be deduped against those blocks.
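
As a toy model of that horizontal-scaling point (the two-engine split below is an assumption for illustration, not the actual configuration): each DDB engine keeps its own signature store, so blocks already known to engine 1 still look brand new to engine 2 and get written again in full.

```python
import hashlib

class DedupEngine:
    """Stand-in for one DDB engine: a private signature -> block store."""
    def __init__(self) -> None:
        self.store: dict[str, bytes] = {}

    def write(self, blocks: list[bytes]) -> int:
        """Return how many blocks actually had to be written (were unique)."""
        written = 0
        for b in blocks:
            sig = hashlib.sha256(b).hexdigest()
            if sig not in self.store:
                self.store[sig] = b
                written += 1
        return written

blocks = [bytes([i]) * 1024 for i in range(100)]  # 100 distinct 1 KiB blocks

engine1, engine2 = DedupEngine(), DedupEngine()
print(engine1.write(blocks))  # 100 -> first full on Server 1 writes everything
print(engine1.write(blocks))  # 0   -> same engine dedupes the Server 2 full
print(engine2.write(blocks))  # 100 -> a different engine writes it all again
```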



