@thomas.S, is the actual throughput the issue or is the amount of initial data the problem?
Starting with the latter, what is the intended retention on the Aux Copy, and how far back do the To Be Copied jobs go? The reason I ask is that it’s entirely possible the Aux Copy is grabbing data it will want to age off once the whole thing completes.
If it’s a performance issue, then we’d need to see some log files and stats to determine whether the problem is the read speed, the network transfer or the write speed. Noting the 2x 10 Gbit cards, are you certain the job is using this interface?
Hello @Mike Struening,
The problem is currently the throughput from my point of view.
The job currently runs every 3 hours and during the day mainly copies the database logs to the object storage. Overnight, the data from the VSA backups is added. That adds up to a few TB.
Tomorrow I can provide the log, which shows the performance data.
I am sure it uses the LAN, because the object storage is only reachable via LAN and nothing in that direction is zoned via FC to the media agents.
Regards
Thomas
Sounds good. I’ll loop in some people who can advise where to find the performance counters as well.
@thomas.S, check CVperfmgr.log on the destination MA for performance metrics. This will show where to focus.
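If it helps, here is a rough Python sketch for pulling just one job’s lines out of CVperfmgr.log. The install path and job ID below are only placeholders, so adjust them for your environment:

# Rough sketch: extract one job's lines from CVperfmgr.log.
# LOG_PATH assumes a default Windows Media Agent install and JOB_ID is a placeholder;
# adjust both for your environment.
from pathlib import Path

LOG_PATH = Path(r"C:\Program Files\Commvault\ContentStore\Log Files\CVPerfMgr.log")
JOB_ID = "2996475"

def lines_for_job(log_path: Path, job_id: str) -> list[str]:
    """Return the log lines that belong to the given job ID."""
    matches = []
    with log_path.open("r", encoding="utf-8", errors="replace") as handle:
        for line in handle:
            # Perf lines carry the job ID in their pipe-delimited prefix.
            if f"|{job_id}|" in line or f"Job-ID: {job_id}" in line:
                matches.append(line.rstrip())
    return matches

if __name__ == "__main__":
    for line in lines_for_job(LOG_PATH, JOB_ID):
        print(line)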
I have collected the logs for the Aux Copy job and only left in the information related to the job ID.
Since these jobs are not that big, I hope you can already read something out of them. I had to deactivate the big jobs first, because otherwise I run into space problems on the disk library.
Thomas
Thanks, @thomas.S!
I checked a few of the stream counters and it looks like the network is the cause.
If you check the ‘Time(seconds)’ column, that is the time the stream/pipe had to wait for data. In some cases, we’re waiting a minute or two.
The one below has some high wait times, though there are several pipes per MA.
3996   6720  05/05 15:03:02 2996475
|*5852487*|*Perf*|2996475| =======================================================================================
|*5852487*|*Perf*|2996475| Job-ID: 2996475        [Pipe-ID: 5852487]        [App-Type: 0]        [Data-Type: 1]
|*5852487*|*Perf*|2996475| Stream Source:  cvmapapp01
|*5852487*|*Perf*|2996475| Network medium:  SDT
|*5852487*|*Perf*|2996475| Head duration (Local):  [05,May,21 15:01:01  ~  05,May,21 15:03:02] 00:02:01 (121)
|*5852487*|*Perf*|2996475| Tail duration (Local):  [05,May,21 15:01:01  ~  05,May,21 15:03:02] 00:02:01 (121)
|*5852487*|*Perf*|2996475| -----------------------------------------------------------------------------------------------------
|*5852487*|*Perf*|2996475|   Perf-Counter                                   Time(seconds)              Size
|*5852487*|*Perf*|2996475| -----------------------------------------------------------------------------------------------------
|*5852487*|*Perf*|2996475|
|*5852487*|*Perf*|2996475| Replicator DashCopy
|*5852487*|*Perf*|2996475|  |_Buffer allocation............................        81                              [Samples - 21079] [Avg - 0.003843]
|*5852487*|*Perf*|2996475|  |_Media Open...................................         6                              [Samples - 15] [Avg - 0.400000]
|*5852487*|*Perf*|2996475|  |_Chunk Recv...................................         5                              [Samples - 3] [Avg - 1.666667]
|*5852487*|*Perf*|2996475|  |_Reader.......................................         7              1110032163      [1.03 GB] [531.67 GBPH]
|*5852487*|*Perf*|2996475|
|*5852487*|*Perf*|2996475| Reader Pipeline Modules[Client]
|*5852487*|*Perf*|2996475|  |_CVA Wait to received data from reader........       119
|*5852487*|*Perf*|2996475|  |_CVA Buffer allocation........................         -
|*5852487*|*Perf*|2996475|  |_SDT: Receive Data............................         7              1111164840      [1.03 GB]  [Samples - 21113] [Avg - 0.000332] [532.21 GBPH]
|*5852487*|*Perf*|2996475|  |_SDT-Head: CRC32 update.......................         1              1111107304      [1.03 GB]  [Samples - 21112] [Avg - 0.000000]
|*5852487*|*Perf*|2996475|  |_SDT-Head: Network transfer...................        93              1111107304      [1.03 GB]  [Samples - 21112] [Avg - 0.004405] [40.06 GBPH]
|*5852487*|*Perf*|2996475|
|*5852487*|*Perf*|2996475| Writer Pipeline Modules[MediaAgent]
|*5852487*|*Perf*|2996475|  |_SDT-Tail: Wait to receive data from source...       120              1111164840      [1.03 GB]  [Samples - 21113] [Avg - 0.005684] [31.05 GBPH]
|*5852487*|*Perf*|2996475|  |_SDT-Tail: Writer Tasks.......................        28              1111107304      [1.03 GB]  [Samples - 21112] [Avg - 0.001326] [133.05 GBPH]
|*5852487*|*Perf*|2996475|   |_DSBackup: Media Write......................          8              1110192223      [1.03 GB]  [465.28 GBPH]
|*5852487*|*Perf*|2996475|
|*5852487*|*Perf*|2996475| ----------------------------------------------------------------------------------------------------
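If you want to scan a larger CVperfmgr.log dump for the same pattern, here is a rough Python sketch that flags counters with long wait times. The 60-second threshold is just an illustrative cut-off, not a Commvault recommendation:

# Rough sketch: flag performance counters with long wait times in a CVperfmgr.log excerpt.
# Reads the excerpt from stdin; the 60-second threshold is an arbitrary example value.
import re
import sys

# Counter lines look roughly like:
# |*5852487*|*Perf*|2996475|  |_SDT-Head: Network transfer....    93    1111107304  [1.03 GB] ...
COUNTER_RE = re.compile(r"\|_(?P<name>[^.]+)\.+\s+(?P<seconds>\d+)")

def slow_counters(text: str, threshold: int = 60):
    """Yield (counter name, seconds) for counters whose wait time exceeds the threshold."""
    for line in text.splitlines():
        match = COUNTER_RE.search(line)
        if match and int(match.group("seconds")) >= threshold:
            yield match.group("name").strip(), int(match.group("seconds"))

if __name__ == "__main__":
    for name, seconds in slow_counters(sys.stdin.read()):
        print(f"{seconds:>5}s  {name}")

Run against the excerpt above, it would surface the reader-side wait, the SDT-Head network transfer and the SDT-Tail wait to receive data, which is consistent with the network being the slow leg.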
Hello @Mike Struening,
Thank you for the analysis. In this case, are there any points I could check on the media agents before opening a case with our network team?
I am thinking of settings on the media agents that could be checked.
Unless you have any throttling in place, not likely. My initial concern was whether you were somehow sending over the main network, though you addressed that earlier.
Let me know what they find!!
I don't think there is anything Commvault configuration-wise that would cause such slow network performance. You could try toggling the Aux Copy between the network-optimized and disk-optimized modes and see if it makes any difference. You only have to suspend the copy, change the setting, and resume it to test.
The rest will come down to benchmarking the system to help isolate the bottleneck - performance is always tricky, as it could be the local OS, network cards, switches/routers in the way, the destination device… you get the picture. So you have to perform some tests to help narrow down the problem.
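As one concrete example of such a test, a bare-bones TCP push between the two Media Agents takes the disks out of the equation entirely. This is only a sketch - the port, payload size and command-line flags below are made up for illustration:

# Bare-bones TCP throughput test between two hosts, to measure the network leg
# without any disk reads or writes. Port, buffer size and total bytes are placeholders.
# Run "python net_test.py --server" on one Media Agent and
# "python net_test.py --client <server-host>" on the other.
import argparse
import socket
import time

PORT = 50555                   # arbitrary free port
PAYLOAD = b"\0" * (1 << 20)    # 1 MiB send buffer
TOTAL_BYTES = 2 * 1024**3      # push 2 GiB per run

def run_server(port: int = PORT) -> None:
    """Accept one connection, discard everything received, report the rate."""
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.monotonic()
            while chunk := conn.recv(1 << 20):
                received += len(chunk)
            elapsed = time.monotonic() - start
            print(f"Received {received / 1024**3:.2f} GiB from {addr[0]} "
                  f"at {received / elapsed / 1024**2:.1f} MiB/s")

def run_client(host: str, port: int = PORT) -> None:
    """Push TOTAL_BYTES of zeros to the server and report the send rate."""
    with socket.create_connection((host, port)) as conn:
        sent, start = 0, time.monotonic()
        while sent < TOTAL_BYTES:
            conn.sendall(PAYLOAD)
            sent += len(PAYLOAD)
        elapsed = time.monotonic() - start
        print(f"Sent {sent / 1024**3:.2f} GiB at {sent / elapsed / 1024**2:.1f} MiB/s")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Simple TCP throughput test")
    parser.add_argument("--server", action="store_true", help="run as the receiving side")
    parser.add_argument("--client", metavar="HOST", help="server host to send to")
    args = parser.parse_args()
    if args.server:
        run_server()
    elif args.client:
        run_client(args.client)
    else:
        parser.error("specify --server or --client HOST")

If that raw test gets nowhere near 10 Gbit line rate, the problem sits in the network path rather than in the Aux Copy itself.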
You could try fiddling with the TCP offload options and TCP chimney, check the teaming mode, and ensure drivers are up to date - try disabling one network card and see if that helps. It sounds like receiving data is fine, so it could be this particular network segment, or something odd with the network teaming - depending on your load-balancing mode, most non-switch-assisted modes can only load balance transmits (round-robin between adapters) - you could try disabling teaming, or one of those NICs, to see if that is contributing to the slowness.
To isolate routing/network issues, you could try configuring a network share (SMB) somewhere and copying some data there as a performance test, either through Windows or a test copy. We also have the cloud test tool, which can upload data to your Hitachi object storage, and you could measure performance from these Media Agents vs. other systems or network segments to help:
https://documentation.commvault.com/commvault/v11/article?p=9234.htm
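If you go down the SMB route, a quick way to put a number on it is to time a copy to the share. A minimal sketch - the UNC path and test size are placeholders, and note it measures the local read and the network write together:

# Quick sketch: time a file copy to an SMB share (UNC path) for a rough network write rate.
# DEST_DIR and TEST_SIZE are placeholders; point DEST_DIR at a share on the same network
# segment the Aux Copy uses. This leaves the copied test file on the share to clean up.
import os
import shutil
import tempfile
import time

DEST_DIR = r"\\some-host\some-share\perftest"   # hypothetical share, replace with yours
TEST_SIZE = 1 * 1024**3                         # 1 GiB test file

def timed_copy(dest_dir: str, size: int = TEST_SIZE) -> float:
    """Create a local temp file of `size` bytes, copy it to dest_dir, return MiB/s."""
    chunk = os.urandom(1024 * 1024)             # 1 MiB of data, repeated to build the file
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        for _ in range(size // len(chunk)):
            tmp.write(chunk)
        src = tmp.name
    try:
        start = time.monotonic()
        shutil.copy(src, dest_dir)
        elapsed = time.monotonic() - start
        return size / elapsed / 1024**2
    finally:
        os.remove(src)

if __name__ == "__main__":
    print(f"Copy rate to {DEST_DIR}: {timed_copy(DEST_DIR):.1f} MiB/s")

Running it from each Media Agent, and then from a host on a different network segment, gives you a quick comparison of where the slowdown sits.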
@thomas.S, thought you’d find this interesting:
Can someone please let me know how an Aux Copy job copies backup jobs?
I can see our Aux Copy has been running for more than 10 days, but I am still seeing very old backup jobs in the partially copied list.
How does the Aux Copy pick up backup jobs for copying?
Ideally, older jobs should be copied first.