Solved

Basic Setting for Media Agent


Userlevel 2
Badge +7

Hi team, can you share the recommended ‘additional settings’ values to add to the Media Agents for backup/aux copy/restore job performance? Is there any guide on the internet you can share with me? Thanks in advance :)


Best answer by Scott Moseman 20 June 2022, 21:18


10 replies

Userlevel 7
Badge +16

In general there is no need; system resources are managed automatically, assuming your hardware is sized correctly.
What are the concerns within your environment? Maybe we can advise you if we have some more context.

Userlevel 2
Badge +7

Hi @Jos Meijer, for example, my tape library aux copy is currently at 99%, in the stage of finishing the job.
I see the pending data is about 1 TB, but the throughput is very slow, running at 4-10 GB/hr. How can I improve the speed, or which settings should I refer to? Why was it able to run very fast earlier and then slow down at the end, at the 99% stage? There are no other jobs running at the moment.

Userlevel 6
Badge +14

Hi @Raj Balaraj ,

 

Were there any changes in the environment before you started to experience degraded performance?

 

I’d suggest checking the performance statistics reported in the logs for the respective jobs.

  • In many backup/restore logs you can look for “stat-” counters. These show the performance of the modules performing activity inside the process.
  • On the Media Agent we have the “CVPerfMgr.log” log, which captures performance stats for some jobs.
  • On the MA we also have “CVD.log”, which tells us the read/write speeds for storage and dedup. Look for “stat-” or “stats:” in the log file. You may also see counters for dedup, CRC and network when filtering the log for the string “Head”.

Also, I’d suggest checking the performance statistics of the MA and the storage (disk/tape), as this may uncover a bottleneck.
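If it helps, here is a rough sketch (not a Commvault tool, just a generic log filter) for pulling those counter lines out of the MA logs. The log folder path is only an example, and the log names and search strings are the ones mentioned above:

```python
# Rough sketch, not a Commvault utility: filter Media Agent logs for the
# performance counter lines mentioned above. The log directory is only an
# example -- adjust it to wherever your MA writes its logs.
from pathlib import Path

LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")  # example path
PATTERNS = ("stat-", "stats:", "head")  # search strings referenced above (lowercased)

def grep_perf_lines(log_name: str) -> None:
    """Print lines from the given log that contain any of the perf-related strings."""
    log_file = LOG_DIR / log_name
    if not log_file.exists():
        print(f"{log_file} not found")
        return
    with log_file.open(errors="ignore") as fh:
        for line in fh:
            if any(p in line.lower() for p in PATTERNS):  # case-insensitive match
                print(line.rstrip())

for name in ("CVPerfMgr.log", "CVD.log"):
    grep_perf_lines(name)
```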

 

If this doesn’t help then I’d suggest opening up a support case so we can take a deeper look.

 

Best Regards,

Michael

Userlevel 6
Badge +17

I see the pending data is about 1 TB, but the throughput is very slow, running at 4-10 GB/hr. How can I improve the speed, or which settings should I refer to? Why was it able to run very fast earlier and then slow down at the end, at the 99% stage?


How many streams are currently running for copying that 1TB of data?
My guess is the other streams completed and we’re down to 1 stream.
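For a rough sense of scale, plugging in the numbers quoted above (purely illustrative arithmetic):

```python
# Back-of-the-envelope only, using the figures from this thread:
# ~1 TB of pending data moving at 4-10 GB/hr.
pending_gb = 1000  # ~1 TB pending

for rate_gb_per_hr in (4, 10):
    hours = pending_gb / rate_gb_per_hr
    print(f"At {rate_gb_per_hr} GB/hr: ~{hours:.0f} hours (~{hours / 24:.1f} days)")
```

At those rates the tail of the job can run for days on a single stream, which is why the stream count matters.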

Thanks,
Scott
 

Userlevel 7
Badge +19

Please stay away from adding additional settings. If there is something unusual going on when it comes to performance, first see if you can find a bottleneck and/or system-wide problems, and check previous job results. If you can't find anything useful, then open a ticket and let someone from support do a check.

Userlevel 2
Badge +7

@Scott Moseman currently it is running with 1 reader and 1 stream. 

Userlevel 2
Badge +7

Hi all, as suggested I will not add any additional settings; I will get the logs and check the tape library performance. Thanks, team.

Userlevel 6
Badge +17

@Scott Moseman currently it is running with 1 reader and 1 stream. 


Generally more streams means more performance -- obviously to a certain point.

I imagine this 1TB is coming from a single backup job?  What type of backup job?  An aux copy will only have as many streams as the backup job.  If there’s a backup job which might benefit from breaking into multiple streams, it can help speed up the aux copies.

For example, Synth Full jobs are single stream unless you’re using Indexing v2.  This is a classic scenario where people have large, single stream jobs.

Thanks,
Scott
 

Userlevel 2
Badge +7

@Scott Moseman it is a single backup job for a file backup from storage. It is an NDMP backup from primary to tape.

Userlevel 6
Badge +17

@Scott Moseman it is a single backup job for a file backup from storage. It is an NDMP backup from primary to tape.


You might see if your NDMP array supports multiple streams per subclient.  If not, your other option could be breaking the data up into more subclients.  Either way, more streams can typically help run those jobs to tape quicker.

https://documentation.commvault.com/11.24/expert/129970_ndmp_agent_feature_support_by_vendor.html
https://documentation.commvault.com/11.24/expert/19631_configuring_multiple_streams_for_backups.html
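As a simplified illustration of how stream count affects elapsed copy time (the per-stream rate below is a hypothetical placeholder, and real throughput only scales roughly linearly up to a point, as noted above):

```python
# Illustrative model only: elapsed time for ~1 TB if the copy scales roughly
# linearly with stream count. The per-stream rate is a hypothetical placeholder;
# real throughput stops scaling once drives, network or the source become the bottleneck.
data_gb = 1000                 # ~1 TB, per this thread
per_stream_gb_per_hr = 50.0    # hypothetical per-stream rate, not a measured figure

for streams in (1, 2, 4, 8):
    hours = data_gb / (streams * per_stream_gb_per_hr)
    print(f"{streams} stream(s): ~{hours:.1f} hours")
```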

Thanks,
Scott
