AUX copy running very slow

  • 1 February 2021
  • 2 replies
  • 2631 views


We are running V11 SP19, and all our media agents are running on Windows Server 2016.

Initially, AUX copy performance peaked at 3 TB/hr to 7 TB/hr; performance gradually degraded, and the AUX copy job now runs at 300 GB/hr to 500 GB/hr.

 

One thing I found is that the DDB verification job is not completing at all.

 

Any support is much appreciated.


2 replies


Hey @Vagicharla Sunilkumar,

There could be a few reasons, but there is some quick automated analysis that Commvault can do to help narrow down the bottleneck. This video is for backups, but it works for aux copy too:

 

Over time, disk libraries with poor random I/O performance can slow down: you are reading less 'new' data in sequential form and instead randomly accessing unique blocks scattered across the disk. You could try flipping the copy mode in the advanced settings of the storage policy from network-optimized to disk-optimized to lower the number of disk reads.


Hi @Vagicharla Sunilkumar,

 

Firstly, I suspect your aux copy is a disk-to-disk, deduplicated DASH copy, correct? With deduplicated copies, the reported performance figure can fluctuate greatly depending on how much space saving deduplication is achieving.

 

This is because the reported throughput reflects how much application-size data has been processed, rather than the throughput of the backend transfer.

For example, if a 1 TB job was deduplicated down to 10 GB and took 1 hour to process during the aux copy, the throughput may show 1 TB/hr even though the backend only needed to transfer 10 GB.

On the flip side, if the 1 TB job only deduplicated down to 500 GB, it may take many more hours to aux copy, since the backend now needs to transfer 500 GB of data.
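To make the arithmetic concrete, here is a minimal sketch in Python (a hypothetical helper for illustration, not a Commvault tool) contrasting the reported, application-size throughput with the actual backend transfer rate:

```python
def throughputs(app_size_gb: float, backend_gb: float, hours: float):
    """Return (reported, backend) throughput in GB/hr.

    reported: application size processed per hour (what the job shows)
    backend:  data actually moved per hour after deduplication
    """
    return app_size_gb / hours, backend_gb / hours

# 1 TB job deduplicated down to 10 GB, processed in 1 hour:
reported, backend = throughputs(1024, 10, 1.0)
print(f"reported: {reported:.0f} GB/hr, backend: {backend:.0f} GB/hr")
# reported: 1024 GB/hr, backend: 10 GB/hr
```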

 

This difference in what needs to be transferred in the backend means that data blocks unknown to the DDB on the destination copy take longer to process (thus lower throughput) than data blocks that are already known, where only the dedupe signature needs to be processed. The job's throughput therefore fluctuates depending on whether it is processing unique or non-unique blocks.
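As a rough illustration of why known blocks are cheap and unique blocks are expensive, here is a toy simulation (this is a simplification, not Commvault's actual DASH copy implementation; the block size, signature algorithm, and DDB structure are all assumptions):

```python
import hashlib

def dash_copy_sim(blocks: list[bytes], dest_ddb: set[str]) -> int:
    """Simulate which blocks a dedupe-aware copy actually transfers.

    Returns the number of bytes sent over the wire: known blocks cost
    only a signature lookup, while unknown blocks are sent in full and
    then registered in the destination DDB.
    """
    SIG_SIZE = 32  # a SHA-256 digest is 32 bytes
    sent = 0
    for block in blocks:
        sig = hashlib.sha256(block).hexdigest()
        if sig in dest_ddb:
            sent += SIG_SIZE               # signature only: cheap and fast
        else:
            sent += SIG_SIZE + len(block)  # full block: the expensive path
            dest_ddb.add(sig)
    return sent

# Mostly-duplicate data transfers almost nothing:
known = [b"A" * 4096] * 100          # 100 identical 4 KB blocks
print(dash_copy_sim(known, set()))   # 1 full block + 99 signatures
```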

 

Since you are on FR19, the first quick thing to check is the performance section in the Job Details of your Aux Copy job whilst it is running. This section shows read, write, network, and SIDB speeds.

 

This gives a good indication of where the bottleneck may be so you can troubleshoot further. Next, we should look at the CVPerfMgr logs on each of the destination MAs. With CVPerfMgr logging, we should always look at the total time of the stream (in seconds) and compare that with the module that took the most time.
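To illustrate that comparison (the module names and timings below are invented, and the parsing of the log itself is omitted since the CVPerfMgr layout varies between versions), the idea is simply to find the module consuming the largest share of a stream's total time:

```python
def slowest_module(module_times: dict[str, float], stream_total_s: float) -> None:
    """Given per-module times (in seconds) already pulled from a perf
    log, report the dominant module and its share of the total stream
    time -- that module is the likely bottleneck."""
    module, seconds = max(module_times.items(), key=lambda kv: kv[1])
    share = 100.0 * seconds / stream_total_s
    print(f"Stream total: {stream_total_s:.0f}s; slowest module: "
          f"{module} at {seconds:.0f}s ({share:.0f}% of stream)")

# Hypothetical numbers for illustration only:
slowest_module(
    {"Reader": 1200.0, "Network": 300.0, "SIDB Lookup": 5400.0, "Writer": 900.0},
    stream_total_s=6000.0,
)
```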

 

If you can provide a copy of your CVPerfMgr log with the job ID, I can give you a quick analysis as an example.
