One of the things that is going to blow up in my company soon is that some jobs are taking over 24 hours to back up their data. The clients with huge durations are usually Windows or Linux file systems, with data volumes ranging from 2 TB to 8 TB+. My systems and networking departments won't help unless all the evidence points to an actual problem with either the network or the server itself causing the long durations.
Is there an easy way on Commvault's end to show that "this client is now running at X GB/hr, compared to when it was Y GB/hr" and prove that it's not the backup software causing the issue? I know an active job shows average throughput, and double-clicking the job ID shows percentages (Read: X%, Network: X%), but I don't know whether those percentages indicate what's contributing to the job's duration or something else.
I want to avoid the tedious work of going back and forth with my departments on each individual server that has a higher-than-usual backup duration, without building an Excel sheet of backup history showing where each one started to slow down, plus a ticket with Commvault every time to prove it's not Commvault.
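For context, here is a minimal sketch of the comparison I'd like to automate instead of maintaining that spreadsheet. It assumes job history exported to CSV; the column names (`client`, `job_id`, `size_gb`, `duration_hr`) and the sample data are my own invention, not an actual Commvault export format:

```python
# Hypothetical sketch: compute per-client throughput (GB/hr) from an
# exported job-history CSV and flag clients whose latest job runs much
# slower than their historical best. Column names are assumptions.
import csv
import io

SAMPLE = """client,job_id,size_gb,duration_hr
FS01,1001,2048,10.5
FS01,1050,2100,26.0
FS02,1002,4096,12.0
"""

def throughput_by_client(csv_text):
    """Return {client: [(job_id, GB/hr), ...]} sorted by job ID."""
    history = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        gb_per_hr = float(row["size_gb"]) / float(row["duration_hr"])
        history.setdefault(row["client"], []).append(
            (int(row["job_id"]), round(gb_per_hr, 1))
        )
    for jobs in history.values():
        jobs.sort()  # older job IDs first
    return history

def flag_slowdowns(history, drop_pct=40):
    """Flag clients whose latest job is drop_pct% slower than their best."""
    flagged = {}
    for client, jobs in history.items():
        best = max(rate for _, rate in jobs)
        latest = jobs[-1][1]
        if latest < best * (1 - drop_pct / 100):
            flagged[client] = (best, latest)
    return flagged

history = throughput_by_client(SAMPLE)
print(flag_slowdowns(history))  # FS01 dropped from ~195 to ~81 GB/hr
```

Something like this would at least give me a list of clients worth raising with the other departments, with before/after numbers attached.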
Best answer by Mike Struening