Question

Aux Copy performance: Does CommVault "throttle" or "pace" itself, in terms of total throughput for a job?

  • 20 March 2023
  • 1 reply
  • 205 views


Just a curiosity... basically:

Does Commvault increase or decrease the throughput (of a job, or per stream) of an Aux Copy based on internal factors like "how much data has to be copied" and/or "when the next job is due to run", or anything similar? Essentially: if no external bottlenecks existed for an Aux Copy, would Commvault go as fast as it can, or would it throttle itself internally because it "doesn't need to go as fast as possible"? If so, is there a way to see these metrics and know when it's occurring?

Put another way: I'm wondering whether Commvault, having "a few TB to copy" for a job, decides "yes, I can easily copy this in x hours before the next job, so there's no need to run at max performance", and then on another day, with say 100 TB to copy, determines "I need to ramp up the throughput for this job if I'm going to finish it in time".

Another way to put this: I'm not looking for a yes/no on whether "job time or amount of data to copy" are literally the exact metrics used for throttling/bursting/scheduling throughput. I'm asking whether Commvault ramps a job's throughput up or down on its own, based on any metrics, or whether it runs all Aux Copy jobs/streams as fast as it can with the available resources (so it's always limited by external resources, or by the Media Agents' own CPU/memory, never deliberately throttled by internal metrics or gates). I've seen Aux Copy jobs move slowly, and the usual answer on these forums is external throttling ("run X tool to see if it's network or storage, etc.")... but if there were no external bottlenecks, does Commvault slow its own jobs down for any reason while resources are available?

1 reply


Hello @tigger2 

Aux Copy performance is mainly affected by network bandwidth and chunk sizes.

First, network bandwidth: if bandwidth is low, the job will run longer than expected even with a high degree of parallelism. There is no way to fix this within Commvault apart from using deduplication, which may ease things, but the permanent solution is to get enough bandwidth. Get a baseline of your bandwidth and estimate whether the Aux Copy can complete in 6-8 hours for the everyday load; anything above 8 hours needs to be streamlined. A rough way to do that estimate is sketched below.
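
A minimal back-of-the-envelope sketch of that estimate (not a Commvault tool; the helper name, the 70% efficiency factor, and the example figures are all hypothetical assumptions):

```python
def aux_copy_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough Aux Copy duration from data volume and network bandwidth.

    data_tb    -- data to copy, in terabytes (decimal)
    link_gbps  -- raw link speed, in gigabits per second
    efficiency -- fraction of raw bandwidth realistically achieved (assumed)
    """
    data_gbits = data_tb * 1000 * 8                       # TB -> gigabits
    return data_gbits / (link_gbps * efficiency) / 3600   # seconds -> hours

# Hypothetical daily load: 5 TB over a 10 Gbps link
print(f"{aux_copy_hours(5, 10):.1f} h")    # ~1.6 h -- fits a 6-8 h window
# Hypothetical catch-up load: 30 TB on the same link
print(f"{aux_copy_hours(30, 10):.1f} h")   # ~9.5 h -- over the 8 h threshold
```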

Second, chunk sizes: if one backup has a huge single chunk, say 1.5 TB, while the other jobs have smaller chunks, the job may still run long, because a single chunk can only be copied by one stream, so the huge chunk limits how much parallelism the copy can use. See the sketch below.
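
A minimal sketch of that effect, under the assumption that each chunk is copied by exactly one stream (the stream count, per-stream rate, and chunk sizes are made-up numbers, not values read from Commvault):

```python
import heapq

def copy_time(chunks_gb: list[float], streams: int, gb_per_hour: float) -> float:
    """Greedy assignment of chunks to streams; returns job duration in hours.

    A chunk is copied by a single stream, so the job ends when the
    busiest stream ends -- one huge chunk sets a floor on job time.
    """
    loads = [0.0] * streams                  # hours of queued work per stream
    heapq.heapify(loads)
    for size in sorted(chunks_gb, reverse=True):
        least_loaded = heapq.heappop(loads)  # give chunk to idlest stream
        heapq.heappush(loads, least_loaded + size / gb_per_hour)
    return max(loads)

# Hypothetical: one 1500 GB chunk among forty 50 GB chunks, 10 streams, 200 GB/h each
print(copy_time([1500] + [50] * 40, streams=10, gb_per_hour=200))  # 7.5 h, pinned by the big chunk
print(copy_time([50] * 70, streams=10, gb_per_hour=200))           # 1.75 h, same total volume
```

Both runs move 3.5 TB, but the job with the oversized chunk takes over four times as long because nine of the ten streams finish early and sit idle.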

The way to fix this is to change the backup type (daily fulls to weekly fulls if possible), use deduplication if possible, and use automatic synthetic fulls to distribute the full backup load across all days.
