Physical CommServe DL160 / Windows 2019 / Version: 11.28.19 / 10 Gb NIC
One LTO-8 drive in an MSL2024 library / SAS
vSphere 7 cluster / 10 Gb NIC
I am trying to back up a monster VM (32 TB) directly to tape.
I think the throughput rate is quite low (see below).
I would like to increase the throughput.
How can I do this? Any help is appreciated.
Thanks.
Hi @virgil , thanks for the thread!
Looking at your screenshot, 77.34% of the load is the write (i.e. the library itself).
LTO-8 has a compressed transfer rate of 750 MB/sec (360 MB/sec native). This is of course the ideal case, but a good baseline to start from.
At the low end that's equal to about 0.3 GB/sec, or 1,080 GB an hour, far higher than you are seeing.
Our best place to start is the cvperfmgr.log on the Media Agent. Can you paste the entry for this job?
It's most likely an issue with the tape/drive, etc., but this log file gives us some solid information to start from.
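To spell out the arithmetic (a quick sketch, using the conservative 0.3 GB/sec figure above; purely illustrative):

# Quick sanity check of the "low end" LTO-8 figure quoted above.
low_end_gb_per_sec = 0.3            # conservative native rate for LTO-8
seconds_per_hour = 3600
print(f"{low_end_gb_per_sec * seconds_per_hour:.0f} GB per hour")   # 1080 GB/hour, far above the observed rate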
Hi Mike,
Thanks for your answer.
I've pasted the log output below:
|*113*|*Perf*|179| Job-ID: 179            [Pipe-ID: 113]            [App-Type: 106]            [Data-Type: 1]
|*113*|*Perf*|179| Stream Source:  fif-hp-p003
|*113*|*Perf*|179| Network medium:  SDT
|*113*|*Perf*|179| Head duration (Local): [16,September,22 21:00:42 ~ 16,September,22 22:20:35] 01:19:53 (4793)
|*113*|*Perf*|179| Tail duration (Local): [16,September,22 21:00:42 ~ 16,September,22 22:20:35] 01:19:53 (4793)
|*113*|*Perf*|179| --------------------------------------------------------------------------------------------------------------------------------
|*113*|*Perf*|179|    Perf-Counter                                                            Time(seconds)             Size
|*113*|*Perf*|179| --------------------------------------------------------------------------------------------------------------------------------
|*113*|*Perf*|179|
|*113*|*Perf*|179| Virtual Server Agent
|*113*|*Perf*|179| |_VM [c0766011med].........................................................        -
|*113*|*Perf*|179|   |_Config Files [VM Files]................................................        -
|*113*|*Perf*|179|     |_Read................................................................        -                  6358925 [6.06 MB]
|*113*|*Perf*|179|     |_Open File...........................................................        -                           [Samples - 12] [Avg - 0.000000]
|*113*|*Perf*|179|     |_Buffer allocation...................................................        -                           [Samples - 158] [Avg - 0.000000]
|*113*|*Perf*|179|     |_Pipeline write......................................................        -                  6382036 [6.09 MB]
|*113*|*Perf*|179|   |_Disk [c0766011med_3.vmdk]..............................................        -
|*113*|*Perf*|179|     |_Disk Read...........................................................     4534             196268261727 [182.79 GB] [145.13 GBPH]
|*113*|*Perf*|179|     |_Buffer allocation...................................................        -                           [Samples - 748705] [Avg - 0.000000]
|*113*|*Perf*|179|     |_Pipeline write......................................................       42             194866490980 [181.48 GB] [15555.73 GBPH]
|*113*|*Perf*|179|
|*113*|*Perf*|179| Reader Pipeline Modules[Client]
|*113*|*Perf*|179| |_CVA Wait to received data from reader...................................     4743
|*113*|*Perf*|179| |_CVA Buffer allocation...................................................       19
|*113*|*Perf*|179| |_SDT: Receive Data.......................................................     4530             195302725665 [181.89 GB] [Samples - 3118185] [Avg - 0.001453] [144.55 GBPH]
|*113*|*Perf*|179| |_SDT-Head: CRC32 update..................................................      141             195302668129 [181.89 GB] [Samples - 3118184] [Avg - 0.000045] [4643.99 GBPH]
|*113*|*Perf*|179| |_SDT-Head: Network transfer..............................................       64             195302668129 [181.89 GB] [Samples - 3118184] [Avg - 0.000021] [10231.30 GBPH]
|*113*|*Perf*|179| |_SDT:Stats...............................................................        -
|*113*|*Perf*|179|   |_[Compression : LZO]...................................................        -
|*113*|*Perf*|179|   |_[Buf size : 65536]....................................................        -
|*113*|*Perf*|179|   |_[Buf count : 90]......................................................        -
|*113*|*Perf*|179|   |_[SDT threads : 20]....................................................        -
|*113*|*Perf*|179|   |_[Processor count : 20]................................................        -
|*113*|*Perf*|179|   |_[Thread per Connection : 10]..........................................        -
|*113*|*Perf*|179|
|*113*|*Perf*|179| Writer Pipeline Modules[MediaAgent]
|*113*|*Perf*|179| |_[Stream target: fif-hp-p003]............................................        -
|*113*|*Perf*|179| |_SDT-Tail: Wait to receive data from source..............................     4775             195302725665 [181.89 GB] [Samples - 3118185] [Avg - 0.001531] [137.13 GBPH]
|*113*|*Perf*|179| |_SDT-Tail: Writer Tasks..................................................      808             195302668129 [181.89 GB] [Samples - 3118184] [Avg - 0.000259] [810.40 GBPH]
|*113*|*Perf*|179|   |_DSBackup: Update Restart Info.........................................        6
|*113*|*Perf*|179|   |_DSBackup: Update Index................................................       32
|*113*|*Perf*|179|   |_DSBackup: Update Restart Info Index...................................        1
|*113*|*Perf*|179|   |_DSBackup: Media Write.................................................      705             195177535652 [181.77 GB] [928.20 GBPH]
How many streams are you running in the job? I am assuming the large VM has multiple disks?
Do you have combine streams or multiplexing enabled?
Check to see if you have the latest drivers for the tape drive installed as well.
The PerfMgr log points to a slow read from the VMDK. Can you post some stats from vsbkp.log?
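If it's easier, the lines we need are the "stat- ID [...]" throughput counters. A small sketch to pull them out (Python just for convenience; the log path is an assumption, adjust it to your install and run it on whichever machine is acting as the VSA proxy):

from pathlib import Path

# Assumed default Commvault log location - change to match your installation.
log_path = Path(r"C:\Program Files\Commvault\ContentStore\Log Files\vsbkp.log")

# Print only the per-disk/datastore throughput counters.
for line in log_path.read_text(errors="ignore").splitlines():
    if "stat- ID" in line:
        print(line)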
Also, since this is tape, you may want to increase the block size and chunk size in the Data Path properties of the storage policy copy, along with some multiplexing, to try to maximise the write speed we can achieve to the tape drive.
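As a rough illustration of why multiplexing helps here (simple arithmetic only; ~40 MB/s per stream is roughly what the Disk Read counter in your PerfMgr output works out to, and 360 MB/s is the published LTO-8 native rate):

# Several slow source streams multiplexed onto one drive raise the aggregate
# write rate until the drive's native speed becomes the cap. Illustrative only.
per_stream_mb_s = 40          # approx. per-stream read rate seen in cvperfmgr.log
drive_native_mb_s = 360       # LTO-8 native write rate
for streams in (1, 2, 4, 8):
    to_tape = min(streams * per_stream_mb_s, drive_native_mb_s)
    print(f"{streams} multiplexed stream(s) -> ~{to_tape} MB/s to the drive")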
Thanks for posting @virgil !
The brilliant minds of @Graham Swift and @Jordan have provided great feedback, as always!
Hi team,
@Graham Swift: 3 streams for this job. Multiplexing is enabled. The latest driver for the tape drive is installed.
@Jordan: Chunk size and block size are at their defaults in the Data Path properties.
What values should I use?
Some stats from vsbkp.log:
17932 5974  09/20 18:13:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:13:24 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:13:24 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:14:14 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:14:15 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:14:47 179 stat- ID [readdisk], Bytes [18530674147681], Time [334414.761606] Sec(s), Average Speed [52.845239] MB/Sec
17932 5974  09/20 18:14:47 179 stat- ID [Datastore Read [VMFS-SAS-MSA01-PACS]], Bytes [18530674147681], Time [334427.060304] Sec(s), Average Speed [52.843296] MB/Sec
17932 4184  09/20 18:14:47 179 stat- ID [allocPLBuffer], Samples [70688910], Time [80.399191] Sec(s), Average [0.000001] Sec/Sample
17932 4184  09/20 18:14:47 179 stat- ID [writePLBuffer], Bytes [18540174863015], Time [5937.205690] Sec(s), Average Speed [2978.048852] MB/Sec
17932 5974  09/20 18:15:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:15:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:15:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 2908  09/20 18:15:48 179 TPool [SdtHeadThPool]. Tot [342756], Pend [0], Comp [342756], Max Par [12], Avg Work Time [72.89 us], Avg Wait Time [5.41 us]
17932 5b44  09/20 18:16:15 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:17:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:17:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:17:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::OnIdle_Running() - Waiting for [1] VMs to be processed.  [c0766011med]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 113, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 115, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = [fif-hp-p003]  [Assigned]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 116, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 5b44  09/20 18:18:16 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:19:16 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5974  09/20 18:19:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:19:26 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:19:26 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:20:17 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 2908  09/20 18:20:48 179 TPool [SdtHeadThPool]. Tot [281736], Pend [0], Comp [281736], Max Par [11], Avg Work Time [72.38 us], Avg Wait Time [5.41 us]
17932 5974  09/20 18:21:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:21:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:21:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:22:17 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::OnIdle_Running() - Waiting for [1] VMs to be processed.  [c0766011med]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 113, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 115, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = [fif-hp-p003]  [Assigned]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 116, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 5974  09/20 18:23:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:23:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:23:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:24:18 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore
17932 5b44  09/20 18:24:18 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:24:47 179 stat- ID [readdisk], Bytes [18567528448353], Time [335013.232996] Sec(s), Average Speed [52.855748] MB/Sec
17932 5974  09/20 18:24:47 179 stat- ID [Datastore Read [VMFS-SAS-MSA01-PACS]], Bytes [18567528448353], Time [335025.558982] Sec(s), Average Speed [52.853803] MB/Sec
17932 4184  09/20 18:24:47 179 stat- ID [allocPLBuffer], Samples [70829498], Time [80.567833] Sec(s), Average [0.000001] Sec/Sample
17932 4184  09/20 18:24:47 179 stat- ID [writePLBuffer], Bytes [18577019362441], Time [5948.398713] Sec(s), Average Speed [2978.352174] MB/Sec
17932 5974  09/20 18:25:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:25:28 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:25:28 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
Thanks.
@virgil - the read speed we are getting from your ESX datastore is ~50 MB/s. You're not going to get faster backups if the read from source is the bottleneck. If this VM is 32 TB, the backup will take about 187 hours to complete.
50 MB/s converts to about 180 GB/hr, which is what your job is showing.
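For reference, the arithmetic behind those two figures (a quick sketch; 32 TB is treated as binary TB here, which is roughly how the ~187 hour estimate falls out):

# Quick sketch of the numbers above.
read_mb_s = 50                      # observed datastore read speed
vm_size_mb = 32 * 1024 * 1024       # 32 TB expressed in MB (binary multiples)
gb_per_hour = read_mb_s * 3600 / 1000     # ~180 GB/hr
hours = vm_size_mb / read_mb_s / 3600     # ~186-187 hours for the full VM
print(f"~{gb_per_hour:.0f} GB/hr, ~{hours:.0f} hours at {read_mb_s} MB/s")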
I agree with @Jordan, this is the actual bottleneck.
How many VSAs do you have assigned to this hypervisor backup?
How much CPU/memory is assigned to the VSA?
Is it physical or VM based?
What transport mode are you using?
@Jordan: Yes, the average speed is about [52.855748] MB/Sec...
@Jos Meijer: 1 VSA / 64 GB. It's a physical CommServe. NBD is used as the transport mode.