Solved

DDB verification job performance - Verification of Existing Jobs on Disk and Deduplication Database

  • 20 August 2021
  • 3 replies
  • 2722 views

Userlevel 4
Badge +13

Hi there,

I would like to ask, in general, which elements are in play during DDB verification. Is there communication between the DDB and the disk library during a DDB verification job?

The documentation says that "Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database." So that means there is communication between the MediaAgent and the CommServe server…

 

The thing is that in our case this verification job, even an Incremental one, takes a very long time, although the performance of the disk hosting the DDB looks quite good.


Best answer by Mike Struening RETIRED 20 August 2021, 16:00


3 replies

Userlevel 7
Badge +23

@drPhil , DDB verification jobs can definitely take a long time. 

The best way to look at this process is to take a step back and think about how Deduplication works:

A backup runs and stores data in 3 places:

1) The CS database knows what jobs ran and which archive files (on the library) they are made up of.
2) The Dedupe Database (DDB) knows which archive files/blocks exist and how many jobs reference them.
3) The library itself holds the archive files (the actual formatted data blocks).

What the DDB verification operation does is look at these 3 areas and make sure they all match.  Which blocks does the CS database think these jobs will need?  Are they on the library or not?
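To make that cross-check concrete, here is a rough sketch. It is purely illustrative (the block IDs and the three in-memory "catalogs" are simplified stand-ins, not Commvault's actual data structures or code), but it shows the idea of lining up the CS database, the DDB, and the library against each other:

```python
# Hypothetical sketch of the three-way cross-check; not Commvault's implementation.

# 1) CommServe DB: which blocks each job is made up of
cs_db = {
    "job_101": {"blk_a", "blk_b", "blk_c"},
    "job_102": {"blk_b", "blk_d"},
}

# 2) Dedupe Database: which unique blocks exist and how many jobs reference them
ddb = {"blk_a": 1, "blk_b": 2, "blk_c": 1, "blk_d": 1}

# 3) Disk library: the blocks actually present on disk ("blk_c" is missing)
library_blocks = {"blk_a", "blk_b", "blk_d"}

def verify(cs_db, ddb, library_blocks):
    """Return each job whose referenced blocks are missing from the DDB or the library."""
    failed_jobs = {}
    for job, blocks in cs_db.items():
        missing = {b for b in blocks if b not in ddb or b not in library_blocks}
        if missing:
            failed_jobs[job] = missing
    return failed_jobs

print(verify(cs_db, ddb, library_blocks))   # {'job_101': {'blk_c'}} -> job marked Failed
```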

Now we can re-read this line from the docs:

Deduplicated Data Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database. Verifying deduplicated data ensures that all jobs that are written as unique data blocks to the storage media are valid for restore or Auxiliary Copy operations.

This is saying that we are cross-referencing item 3 against what 1 and 2 expect to be there.  This way you don’t get caught flat-footed if you go to restore data that is actually missing.  What happens when invalid blocks are found?  We check the next line:

The jobs containing invalid data blocks are marked with the Failed status. These invalid unique data blocks will not be referenced by the subsequent jobs. As a result, new baseline data for the invalid unique data blocks is written to the storage media.

This way, the next time a job runs (that references these missing blocks), we write them back down.
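Again purely as an illustrative sketch (the helper and block IDs below are hypothetical, not a Commvault API), this is the behavior being described: the next backup keeps referencing valid blocks, but any block flagged as invalid gets written back to the library as new baseline data:

```python
# Hypothetical sketch: the next backup does not reference an invalid block;
# it writes a fresh baseline copy of it to the library instead.

ddb = {"blk_a": 1, "blk_b": 2, "blk_d": 1}     # reference counts per unique block
library_blocks = {"blk_a", "blk_b", "blk_d"}   # blocks actually on disk
invalid_blocks = {"blk_c"}                     # flagged by the verification job

def run_backup(job_blocks, ddb, library_blocks, invalid_blocks):
    """Simulate the next backup: reference valid blocks, rewrite invalid ones."""
    for blk in job_blocks:
        if blk in ddb and blk in library_blocks and blk not in invalid_blocks:
            ddb[blk] += 1               # dedupe hit: just add a reference
        else:
            library_blocks.add(blk)     # write the block again (new baseline data)
            ddb[blk] = 1                # start a fresh reference count
            invalid_blocks.discard(blk)

run_backup({"blk_b", "blk_c"}, ddb, library_blocks, invalid_blocks)
print("blk_c" in library_blocks)   # True: baseline data for blk_c was rewritten
```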

As you can imagine, that is a LOT of work to check, and it will take a LONG time.  There are some notes in the documentation about adjusting stream counts that might help.

Let me know if this helps!

Userlevel 4
Badge +13

I can't find the right words to express my gratitude for such a great explanation. Now I understand very well how deduplication works. It is awesome! Thanks, @Mike Struening!

Userlevel 7
Badge +23

I’m just as grateful for you, @drPhil!!!  Great way to end a week, eh?

Reply