Solved

Dedup Data Written expands with new DD-engine



Hi, 

We are setting up a new Windows MA with a new DD-Engine and a new DiskLib. The DDB disk has been configured with the recommended 32k block size, and the DiskLib volumes with a 64k block size.
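As background on the allocation-unit settings above: at the filesystem level, the practical effect of the 32k/64k choice is that each file's on-disk footprint is rounded up to a whole number of clusters. A minimal, generic illustration (not Commvault-specific; the 100 KB chunk file is a made-up example):

```python
# Generic illustration: space a file occupies on a volume is its size
# rounded up to the allocation-unit (cluster) size. The 32 KB / 64 KB
# figures mirror the DDB-disk and DiskLib settings described above.

def on_disk_size(file_bytes: int, cluster_bytes: int) -> int:
    """Bytes consumed on disk for one file, rounded up to whole clusters."""
    clusters = -(-file_bytes // cluster_bytes)  # ceiling division
    return clusters * cluster_bytes

# A hypothetical 100 KB chunk file on each volume type:
print(on_disk_size(100 * 1024, 32 * 1024))  # 131072 (4 x 32 KB clusters)
print(on_disk_size(100 * 1024, 64 * 1024))  # 131072 (2 x 64 KB clusters)
```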

We have run a new DASH Copy in the SP to create a new dedup baseline from the previously backed-up data. The plan is to make this DASH Copy the new Primary once all data is available.

A standard re-baseline operation that usually gets a better dedup ratio with an updated DD-Engine.

The backup data is mostly Hyper-V VMs.

 

In this case, the Data Written in the old copy was 9 TB, and the copy stopped at 18 TB on the new MA when it ran out of resources (DiskLib full).

This was unexpected. 
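For context, the growth reported above can be expressed as a simple expansion factor:

```python
# Quick arithmetic from the numbers in this thread: Data Written grew from
# 9 TB on the old copy to 18 TB on the new MA before the DiskLib filled up.

old_written_tb = 9.0
new_written_tb = 18.0

expansion = new_written_tb / old_written_tb
print(f"Data Written grew {expansion:.1f}x")  # 2.0x

# If the new DD-Engine deduped as well as the old one (or better, as a
# re-baseline usually does), we would expect a factor at or below 1.0;
# 2.0x suggests dedup was not taking effect for part of the copied jobs.
```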

Now we are thinking about doing a Move DDB and MountPath operation instead, but if there is a general deduplication degradation, we are not sure this will work, or whether we will run into the same issue again.

Does anyone have a similar experience or knowledge about this issue and a recommendation on how to move forward?

I have done both new baselines and Move DDB many times, always with a better dedup ratio on the new MA; I was expecting that here as well.

Appreciate feedback, 

/Patrik   


Best answer by Mike Struening 18 May 2021, 16:03

@Patrik , there’s an update in 11.20.25 that addresses Dedupe space issues and another in 11.20.19 for Primary and Dash copy size differences:

  • Get storage pool details API may not return totalAppSize, totalDataSize and DeDupSavingSize under the "DDBDetails" tag (Form ID 2790)
  • Space-optimized Auxcopy job may copy jobs out of order if there are partial jobs; the deduplication ratio may differ between Primary and DASH copy with a similar set of jobs (Form IDs 2473, 2474)

I would upgrade accordingly before retrying.


6 replies


@Patrik , I asked around here and the initial suspicion is a possible product issue.  Can you confirm your version on the CS and (involved) MA(s)?


Thanks Mike, 

CS and all MAs are all on 11.20.9. 

Maybe we will try to promote a new empty Primary Copy and start with a new client baseline. That way we can keep the old Primary Copy for a while and go back to it if we see the same poor dedup.

We do have a secondary copy offsite, so there is no risk of data loss.

/Patrik



Thanks Mike, 

We skipped the Auxcopy process to switch the Primary Copy and instead created a new empty Primary Copy; we are currently initiating new Full backups to the new STG_POOL. The retention was only 28 days and only a small number of clients were associated with it. The secondary copy holds one year of extended-retention data, though, so we wanted to reuse the SP.
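Before committing capacity to a fresh baseline like this, a back-of-the-envelope estimate can flag whether the new DiskLib can hold the full baseline plus a 28-day retention cycle. A minimal sketch, where every input is an assumed placeholder value except the 28-day retention mentioned above:

```python
# Hypothetical sizing sketch for a new full baseline. All inputs below are
# assumed example values, except the 28-day retention from this thread.

front_end_tb = 9.0       # assumed front-end (application) size
dedup_ratio = 3.0        # assumed dedup ratio for the new baseline
daily_change = 0.03      # assumed 3% daily change rate, treated as unique data
retention_days = 28      # retention from the storage policy in this thread

baseline_tb = front_end_tb / dedup_ratio
incrementals_tb = front_end_tb * daily_change * retention_days
total_tb = baseline_tb + incrementals_tb

print(f"baseline: {baseline_tb:.2f} TB")              # 3.00 TB
print(f"incrementals: {incrementals_tb:.2f} TB")      # 7.56 TB
print(f"estimated DiskLib need: {total_tb:.2f} TB")   # 10.56 TB
```

Treating the daily change as fully unique data makes this a conservative (worst-case) estimate; real dedup across incrementals would lower it.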

We will monitor and report back if the dedup is as expected. 

I suspect an update would likely correct the Auxcopy issue, as you suggest. I have done this many times with hardware refreshes, and it has always created a better dedup baseline.

/Patrik 


Appreciate the reply!  Keep me posted and we’ll go from there.


Hey @Patrik! Following up to see if you have any progress to share.

thanks :grin:
