Solved

DDB Verification Schedule Policy

  • 22 June 2021
  • 4 replies
  • 1475 views

Userlevel 4
Badge +13

Hi there!

 

There is a System Created DDB Verification schedule policy (Data Verification). In our case it starts every day at 6 AM. Is it possible to decrease the frequency of the schedule to, e.g., once a week without any risk?

What is the optimal frequency for the System Created DDB Verification schedule policy? I am asking because there is quite a big amount of data to be processed during this task, which can reduce the performance of other tasks.


Best answer by Laurent 22 June 2021, 10:46


4 replies

Userlevel 4
Badge +13

Thanks @Blaine Williams and @Laurent for your comments. Now, this topic is much clearer for me.

Userlevel 6
Badge +15

Exactly @drPhil! :slight_smile:

Or if you wish to start tuning, you can maybe change the starting time to make sure your resources have as little else to do at that moment, or even split the schedules so that one data verification covers GDSP1 and another schedule runs a bit later for GDSP2 when the first is close to complete, to avoid both running at the same time.

It all depends on your infrastructure and its capacity to process parallel I/Os, as this is a really read-intensive process.
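To illustrate the staggering idea with a rough sketch (plain Python, nothing Commvault-specific; the durations and names are made up for this example), you could estimate the second schedule's start time from the first verification's recent run times:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical recent run durations (hours) for the GDSP1 verification job,
# e.g. taken from the DDB verification job history.
gdsp1_recent_durations_h = [5.5, 6.2, 5.8, 6.0]

gdsp1_start = datetime(2021, 6, 22, 6, 0)      # current 6 AM start
avg_duration = timedelta(hours=mean(gdsp1_recent_durations_h))
overlap_margin = timedelta(minutes=30)         # let GDSP1 get "close to complete"

# Start GDSP2 shortly before GDSP1 typically finishes, so the two jobs barely overlap.
gdsp2_start = gdsp1_start + avg_duration - overlap_margin
print(f"Suggested GDSP2 verification start: {gdsp2_start:%H:%M}")
```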

Regarding the amount of data to process: the more data that has been written since the last verification, the more data will have to be processed. If you don’t perform any backups at all, the incremental data verification will have nothing to do.

So if you extend the time between two DDB verification jobs, you would have 1) more data to process per job and also 2) a higher probability/risk of bad blocks still being referenced without having been marked as invalid.
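As a back-of-the-envelope illustration (the daily change rate below is an assumed number, not something from your environment or the documentation), both the per-job workload and the amount of not-yet-verified data grow roughly linearly with the interval:

```python
# Rough illustration: an incremental DDB verification has to process whatever
# unique data was written since the previous verification job.
daily_new_unique_data_tb = 8.5   # hypothetical average daily change rate

for interval_days in (1, 3, 7):
    per_job_workload_tb = daily_new_unique_data_tb * interval_days
    print(f"every {interval_days} day(s): "
          f"~{per_job_workload_tb:.1f} TB to verify per job, "
          f"up to {interval_days} day(s) of backups unverified between runs")
```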

Userlevel 4
Badge +13

Hi @Blaine Williams! I have already been checking this history, and it seems to me that the Total Data Size is quite huge - sometimes more than 60 TB. I would expect a much lower number… Maybe it depends on the size of the primary backups being performed, which in our case is roughly 100 TB in total (for all policies). However, since it is the default schedule policy, it is most likely better to leave it as it is, isn't it?

 

 

Userlevel 5
Badge +8

 

Hi DrPhil, 

Deduplicated Data Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database. Verifying deduplicated data ensures that all jobs that are written as unique data blocks to the storage media are valid for restore or Auxiliary Copy operations.

The jobs containing invalid data blocks are marked with the Failed status. These invalid unique data blocks will not be referenced by the subsequent jobs. As a result, new baseline data for the invalid unique data blocks is written to the storage media.

https://documentation.commvault.com/commvault/v11_sp20/article?p=12567.htm
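Conceptually it works roughly like this - purely an illustrative sketch in Python with invented field names, not how Commvault actually implements it:

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative data model only; these names are made up for this sketch.
@dataclass
class DdbRecord:
    block_id: str
    signature: str              # hash stored in the DDB
    on_disk_data: bytes         # stand-in for the block read from the disk library
    referencing_jobs: set = field(default_factory=set)

def verify_blocks(records):
    """Cross-check stored signatures against the data actually on disk."""
    failed_jobs, invalid_blocks = set(), set()
    for rec in records:
        recomputed = hashlib.sha256(rec.on_disk_data).hexdigest()
        if recomputed != rec.signature:
            invalid_blocks.add(rec.block_id)
            failed_jobs |= rec.referencing_jobs   # jobs containing bad blocks marked Failed
    # Invalid blocks are no longer referenced by subsequent jobs; a new baseline
    # copy of that data is written to the storage media instead.
    return failed_jobs, invalid_blocks
```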

 

You can by all means change the schedule, but the verification is there to check your data and confirm that the blocks being referenced are OK. This, coupled with the space reclamation, keeps things tidy and in order.

By default, the deduplicated data verification is automatically associated with the System Created DDB Verification schedule policy. This schedule policy runs an incremental deduplicated data verification job every day, so it should only be running against the changes since the last verification and shouldn’t be too intensive.
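Loosely speaking (again just an invented sketch, not Commvault code), an incremental run only has to look at blocks written since the previous verification job:

```python
from datetime import datetime

# Hypothetical block catalogue: (block_id, time the block was written).
blocks = [
    ("blk-001", datetime(2021, 6, 20, 22, 0)),
    ("blk-002", datetime(2021, 6, 21, 23, 30)),
    ("blk-003", datetime(2021, 6, 22, 1, 15)),
]

last_verification = datetime(2021, 6, 21, 6, 0)

# An incremental run only needs the blocks written after the last verification.
to_verify = [bid for bid, written in blocks if written > last_verification]
print(to_verify)   # -> ['blk-002', 'blk-003']
```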

Have you reviewed the job history for the DDB Verifications? Are you seeing fluctuating times? How long are they taking? Is there one problematic job?

Viewing DDB Verification History: https://documentation.commvault.com/commvault/v11_sp20/article?p=129004.htm

 
