Just a curiosity, basically:
Does Commvault increase or decrease the throughput of an Aux copy (per job, or per stream) based on internal factors like “how much data has to be copied” and/or “when the next job is due to run”, or anything similar? Essentially: if no external bottlenecks existed for an Aux copy, would Commvault go as fast as it can, or would it internally throttle because it “doesn’t need to go as fast as possible”? If so, is there a way to see these metrics/know when it’s occurring?
Put another way: I’m wondering if Commvault, having “a few TB to copy” for a job, decides “yes, I can easily copy this in X hours before the next job, so there’s no need to run at max performance”, and then on another day, when there’s 100 TB to copy, determines “I need to ramp up the throughput for this job if I’m going to finish in time”?
One more way to put it: I’m not looking for a yes/no on whether “time until the next job” or “amount of data to copy” are literally the exact metrics used for throttling/bursting/scheduling of throughput. I’m asking more broadly: does Commvault ramp a job’s throughput up or down on its own, based on any internal metrics, or does it run all Aux copy jobs/streams “as fast as it can” with the available resources (so it’s always limited by external bottlenecks, or by the CPU/memory of the Media Agents themselves, rather than purposely throttled by internal metrics/gates)? I’ve seen Aux copy jobs move slowly, and the usual answer (on these forums) is that it’s external throttling: “run X tool to see if it’s network, storage, etc.”… But if there were no external bottlenecks, does Commvault slow its own jobs down for any reason when resources are available?