Question

Writing new blocks after large volumes were cut over to a new server

  • 27 March 2024
  • 6 replies
  • 26 views

Userlevel 2
Badge +6

Hi Commvault people,

I am looking for some advice on a recent volume migration and Commvault’s behaviour afterwards.

We recently upgraded a large file server: 60 TB of data across several volumes.
The data on “Server 1” is already under dedupe.

When we migrated the volumes to “Server 2”, we obviously had to create new subclients and kick off new fulls, but I was surprised to see these new backups writing large amounts of data.

Previously, we were writing minimal amounts of data, literally a few GB, despite the volumes being around 15 TB (a very small rate of change).

Given that dedupe is in play and we already hold these blocks in the store, why are we writing so many new blocks? To put this in context, we are writing maybe 30% of the application size, which is a few TB, compared to the previous few GB.
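To put rough numbers on that (these are just the approximate figures above, not measured values):

```python
# Back-of-the-envelope numbers from the figures above (approximate,
# not measured): the new fulls write ~30% of a ~15 TB volume.
volume_tb = 15            # approximate size of one migrated volume
written_fraction = 0.30   # rough share of application size being written

print(f"~{volume_tb * written_fraction:.1f} TB written")  # ~4.5 TB, versus a few GB previously
```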

Hopefully that makes sense. Basically, I am trying to work out why Commvault is not reusing the pre-existing blocks for these new backups.

Thanks


6 replies

Userlevel 2
Badge +8

Hi @MountainGoat,

Can you please confirm whether the same deduplication engine is used for Server 2 as well?

Regards,
Suleman

Userlevel 2
Badge +6

Same SP, same DDB, same everything.

Userlevel 2
Badge +6

Could it be somehow related to the previous backups being synthetic fulls, whereas you cannot run a synthetic full as the first-ever backup of a subclient? It has to be a regular full…

Userlevel 2
Badge +8

Hi @MountainGoat,

That could be a possibility, but if the data is the same, there should be more savings. Is there a possibility that the data being backed up is different/unique?

Regards,
Suleman

Userlevel 6
Badge +18

Are both the old and new subclients using the same “block level” settings?
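As a rough illustration of why that matters (a minimal Python sketch assuming simple fixed-size block signing, not Commvault code): if the old and new subclients sign blocks at different sizes, identical data produces completely disjoint signatures, so nothing dedupes against the existing store.

```python
# Minimal sketch, NOT Commvault code: fixed-size block signing.
# If the old and new subclients sign blocks at different sizes (or
# offsets), the same bytes produce entirely different signatures,
# so the store sees every block as new and writes it again.
import hashlib
import os

def block_signatures(data: bytes, block_size: int) -> set[str]:
    """Hash fixed-size chunks of data, as a simple dedupe engine might."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

data = os.urandom(1024 * 1024)  # 1 MiB of sample data standing in for a volume

old_store = block_signatures(data, 128 * 1024)  # blocks signed at 128 KiB
new_fulls = block_signatures(data, 512 * 1024)  # same data, signed at 512 KiB

# Identical data, zero overlapping signatures: nothing dedupes.
print(len(old_store & new_fulls))  # -> 0
```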

Thanks,
Scott

Badge +3

@MountainGoat, even though you are using the same SP, it is advisable to confirm that it is the same DDB engine and that there is no horizontal scaling of DDBs (where new engines will be spawned).

In the usual scenario, since all the blocks from the volume on “Server 1” are already available, the new data should dedupe against those blocks.
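Here is a toy sketch of the horizontal-scaling point (plain Python, not Commvault internals): two engines that do not share a signature store each treat first-seen blocks as unique, so a freshly spawned engine rewrites data that an existing engine already holds.

```python
# Toy sketch, NOT Commvault code: independent dedupe engines do not
# share signatures. If horizontal scaling spawned a new DDB engine for
# Server 2, blocks already held under engine 1 still look brand new
# to engine 2, and the first fulls through it rewrite everything.
import hashlib
import os

class ToyDedupeEngine:
    """Write a block to the store only if its signature is unseen."""
    def __init__(self) -> None:
        self.signatures: set[str] = set()
        self.bytes_written = 0

    def backup(self, blocks: list[bytes]) -> None:
        for block in blocks:
            sig = hashlib.sha256(block).hexdigest()
            if sig not in self.signatures:  # unknown block -> write it
                self.signatures.add(sig)
                self.bytes_written += len(block)

blocks = [os.urandom(128 * 1024) for _ in range(100)]  # 100 sample blocks of 128 KiB each

engine1 = ToyDedupeEngine()
engine1.backup(blocks)   # Server 1's earlier backups seeded this engine
engine1.backup(blocks)   # re-running against the same engine writes nothing new

engine2 = ToyDedupeEngine()  # a freshly spawned engine shares no state
engine2.backup(blocks)       # identical data, yet every block is rewritten

print(engine1.bytes_written == engine2.bytes_written)  # True: engine 2 gained nothing from engine 1
```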
