Solved

HotAdd backup performance tuning

  • 29 November 2021
  • 8 replies
  • 72 views

Userlevel 1
Badge +6

Hi,

 

We have set up a Commvault Media Agent to back up VMs using HotAdd transport mode.

The setup works, but its average throughput is less than ideal.

Below is the Media Agent's setup:

  • Directly FC-connected to an LTO-8 tape library with 4 FC drives
  • 16 CPU cores
  • 32 GB memory

 

With the same spec, our NetBackup environment achieves an average throughput of 300 MB/s, or roughly 1 TB/hour.

 

However, this Commvault Media Agent only achieves around 500 GB/hour, half of what it should be capable of.

 

That being said, in NetBackup we did increase NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS to maximise performance, so I'm wondering whether Commvault has similar parameters I can tweak to make it run faster?
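For reference, the NetBackup tuning referred to above is done with integer "touch files" on the media server. A minimal sketch (the values shown are illustrative, not recommendations; CFG_DIR defaults to a scratch directory so the sketch is safe to run anywhere):

```shell
# NetBackup reads these integer "touch files" at job start.
# On a real media server, set CFG_DIR=/usr/openv/netbackup/db/config first;
# here it defaults to a scratch directory so the sketch is harmless to run.
CFG_DIR="${CFG_DIR:-$(mktemp -d)}"
echo 256    > "$CFG_DIR/NUMBER_DATA_BUFFERS"   # buffers allocated per drive
echo 262144 > "$CFG_DIR/SIZE_DATA_BUFFERS"     # 256 KiB per buffer
echo "buffer tuning written to $CFG_DIR"
```

Larger and more numerous buffers let the tape drive stream without stalling between blocks, which is why they matter for LTO throughput.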

 

Thanks,

Kelvin


Best answer by Kelvin 10 January 2022, 15:38


8 replies

Userlevel 4
Badge +7

Hello Kelvin,

To increase write performance to the library, look into combining streams and multiplexing while writing to tape:

Streams Overview (commvault.com)

Data Multiplexing - Overview (commvault.com)

 

From the VMware/backup end, we can increase the number of data readers running at once; the data readers setting controls how many VMDK disks are backed up concurrently.

To change this, right-click the subclient, go to Advanced Options, and increase the number of readers.

 

Userlevel 1
Badge +6

Hi @Dan White ,

I’ve already tested that, e.g. going from 5 streams to 18 streams by tweaking these three parameters:

  • Number of Data Readers
  • Device Streams
  • Multiplexing Factor

The testing results showed no significant difference.

BTW, at the moment the settings are 20 Data Readers, 4 Device Streams and a Multiplexing Factor of 5, which gives us up to 20 concurrent streams across the four FC drives (5 per drive).

That’s why I’m wondering whether this has something to do with the default buffer size.

 

Regards,

Kelvin

Userlevel 7
Badge +23

@Kelvin , did you ever get an answer for this?  If not, likely a good idea to open a support case.

Userlevel 1
Badge +6

Hi Mike,

No, I didn’t get an answer, but it’s not very important now because the speed improved greatly after I changed the proxy server’s HDD SCSI controller type back to LSI Logic SAS.

 

Cheers,

Kelvin

Userlevel 7
Badge +23

Oh, that’s at least a happy ending!

Do you want to create a case to pursue, or mark this as closed?

Whatever works best for you, of course :nerd:

Userlevel 7
Badge +15

Hi Mike,

No, I didn’t get an answer, but it’s not very important now because the speed improved greatly after I changed the proxy server’s HDD SCSI controller type back to LSI Logic SAS.

 

Cheers,

Kelvin

Did you experiment with paravirtual, and is that what you moved away from?

Userlevel 1
Badge +6

Hi Mike,

Let’s mark it closed.

Cheers,

Kelvin

Userlevel 1
Badge +6

@Damian Andre 

Yes, that’s correct. I moved back from paravirtual to LSI Logic SAS, and since then the speed has roughly doubled.
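For anyone wanting to verify the same thing on their own proxy: from inside a Linux proxy VM, lspci reports the virtual SCSI controller model. The classify_scsi helper below is a hypothetical sketch that maps the strings lspci typically prints for these two controller types:

```shell
# Hypothetical helper: map an lspci controller description to a friendly
# name. The matched strings are what lspci typically prints for VMware
# PVSCSI and LSI Logic SAS virtual controllers.
classify_scsi() {
  case "$1" in
    *PVSCSI*)            echo paravirtual ;;
    *"Fusion-MPT SAS"*)  echo lsi-logic-sas ;;
    *)                   echo unknown ;;
  esac
}

# On the proxy VM itself you would feed it real output, e.g.:
#   lspci | grep -iE 'scsi|sas' | while read -r line; do classify_scsi "$line"; done
classify_scsi "VMware PVSCSI SCSI Controller"
```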
